Logstash not reading in new entries from MySQL

Problem description

I have Logstash and Elasticsearch installed locally on my Windows 7 machine. I installed logstash-input-jdbc in Logstash.

I have data in a MySQL database which I send to Elasticsearch using Logstash so that I can generate some reports.

Here is the Logstash config file that does this:

input {
  jdbc {
    jdbc_driver_library => "C:/logstash/lib/mysql-connector-java-5.1.37-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/test"
    jdbc_user => "root"
    jdbc_password => ""
    statement => "SELECT * FROM transport.audit"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # MM = month (lowercase mm would mean minute of hour)
    index => "transport-audit-%{+YYYY.MM.dd}"
  }
}

This works, and Logstash sends the data to Elasticsearch when I run:

bin/logstash agent -f logstash/conf/1_input.conf

This is the response from that command:

io/console not supported; tty will not be manipulated
Default settings used: Filter workers: 4
Logstash startup completed
Logstash shutdown completed

WHY does Logstash shut down?

When I check Elasticsearch, the data is there, and if I run the command again, the data is re-indexed (duplicated).

Here is the MySQL data (screenshot omitted):

What I am trying to achieve:

I want Logstash to run and listen for new entries on the audit table and index only that data (when a new audit entry is inserted into the table, Logstash should know about it and send that entry to Elasticsearch).

Also, why does Logstash stop when I run that command? Should it not keep running? I am new to Logstash and Elasticsearch.

Thanks

G

I have also posted the same question in the Elastic forum; if I get an answer there, I will post it here to help others.

Solution

By default, the logstash-input-jdbc plugin will run your SELECT statement once and then quit. You can change this behavior by adding a schedule parameter with a cron expression to your configuration, like this:

input {
  jdbc {
    jdbc_driver_library => "C:/logstash/lib/mysql-connector-java-5.1.37-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/test"
    jdbc_user => "root"
    jdbc_password => ""
    statement => "SELECT * FROM transport.audit"
    schedule => "* * * * *"               # <----- add this line
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
  }
}

The result is that the SELECT statement will now run every minute.
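
The schedule option accepts standard five-field cron syntax, so other polling intervals can be expressed the same way. For example, to run the query every five minutes instead of every minute:

   schedule => "*/5 * * * *"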

If you had a date field in your MySQL table (though that doesn't seem to be the case), you could also use the predefined sql_last_start parameter so that you don't re-index all records on every run. That parameter can be used in your query like this:

   statement => "SELECT * FROM transport.audit WHERE your_date_field >= :sql_last_start"
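
As a final note on the duplicates you observed: if your audit table has an auto-increment primary key (an assumption here, since the table schema isn't shown), two further options can make repeated runs idempotent. Later versions of the plugin renamed sql_last_start to sql_last_value and let you track a numeric column instead of the last run time (via use_column_value and tracking_column), and the elasticsearch output can reuse that key as the document id so re-indexed rows overwrite rather than duplicate. A minimal sketch, assuming the key column is named id:

input {
  jdbc {
    jdbc_driver_library => "C:/logstash/lib/mysql-connector-java-5.1.37-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/test"
    jdbc_user => "root"
    jdbc_password => ""
    schedule => "* * * * *"
    # track the highest id seen so far instead of the last run time;
    # :sql_last_value is persisted between runs
    use_column_value => true
    tracking_column => "id"
    statement => "SELECT * FROM transport.audit WHERE id > :sql_last_value"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "transport-audit-%{+YYYY.MM.dd}"
    # reuse the primary key as the document id so a re-run on the same
    # day updates existing documents instead of creating duplicates
    document_id => "%{id}"
  }
}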
