This walkthrough uses the ES-Spark plugin from ES-Hadoop to bulk-insert data into Elasticsearch from Spark.
Method/Steps
1. With Elasticsearch running, copy the es-hadoop jar into Spark's lib directory and start spark-shell with it on the classpath:

    cp elasticsearch-hadoop-2.1.2/dist/elasticsearch-spark* spark-1.6.0-bin-hadoop2.6/lib/
    cd spark-1.6.0-bin-hadoop2.6/bin
    ./spark-shell --jars ../lib/elasticsearch-spark-1.2_2.10-2.1.2.jar
2. In the interactive shell, the session looks like this:

    import org.apache.spark.SparkConf
    import org.elasticsearch.spark._

    val conf = new SparkConf()
    conf.set("es.index.auto.create", "true")
    conf.set("es.nodes", "127.0.0.1")
    val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
    val airports = Map("OTP" -> "Otopeni", "SFO" -> "SanFran")
    sc.makeRDD(Seq(numbers, airports)).saveToEs("spark/docs")
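Besides Scala Map objects, es-hadoop can also index strings that are already serialized JSON, via saveJsonToEs from the same org.elasticsearch.spark package. A minimal sketch, assuming the same spark-shell session as above (the index/type name spark/json-docs and the field names are illustrative, not from the original):

```scala
import org.elasticsearch.spark._

// Each string is already a complete JSON document;
// es-hadoop writes it as-is instead of serializing a Scala object.
val json1 = """{"airport": "OTP", "city": "Otopeni"}"""
val json2 = """{"airport": "SFO", "city": "San Francisco"}"""

sc.makeRDD(Seq(json1, json2)).saveJsonToEs("spark/json-docs")
```

This is convenient when documents arrive from an upstream system already in JSON form, since it skips a deserialize/reserialize round trip.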
3. Then check the data in ES: http://127.0.0.1:9200/spark/docs/_search?q=*
4. The result looks like this:

    {
      "took": 71,
      "timed_out": false,
      "_shards": { "total": 5, "successful": 5, "failed": 0 },
      "hits": {
        "total": 2,
        "max_score": 1.0,
        "hits": [
          {
            "_index": "spark",
            "_type": "docs",
            "_id": "AVfhVqPBv9dlWdV2DcbH",
            "_score": 1.0,
            "_source": { "OTP": "Otopeni", "SFO": "SanFran" }
          },
          {
            "_index": "spark",
            "_type": "docs",
            "_id": "AVfhVqPOv9dlWdV2DcbI",
            "_score": 1.0,
            "_source": { "one": 1, "two": 2, "three": 3 }
          }
        ]
      }
    }
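To pull the documents back into Spark instead of querying the REST endpoint, the same org.elasticsearch.spark._ import also enriches SparkContext with esRDD. A sketch under the same assumptions (live cluster at 127.0.0.1, index spark/docs written in step 2):

```scala
import org.elasticsearch.spark._

// esRDD returns an RDD[(String, Map[String, AnyRef])]:
// each element pairs the document _id with its _source as a Map.
val docs = sc.esRDD("spark/docs")
docs.collect().foreach { case (id, source) =>
  println(s"$id -> $source")
}

// An optional query string narrows the results,
// e.g. only documents whose OTP field matches "Otopeni":
val otp = sc.esRDD("spark/docs", "?q=OTP:Otopeni")
```

This keeps the round trip inside Spark, which is handy when the query result feeds further RDD transformations.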