PureConnect

Discussion Thread View
  • 1.  Logskim Would Be Great But

    Posted 10-06-2016 18:53
    Logskim is a pretty cool way of getting CIC server logs into something quick and easy to view and filter on. We already use it along with Kibana for our internal IceLib tracing. However, I downloaded Logskim, created several Elasticsearch instances, and always come up with an error. I'm posting my command line and the error message I get in hopes that someone can spot why Logskim can't see my Elasticsearch node:

        logskim -es http://logsearch.internal_server:9200 http.compression: true -dir \\server_name\d$\I3\IC\Logs -level 60

        2016/10/06 13:30:50 Sink:0xc04206d400 - Writing to name:C:\Users\user_name\AppData\Local\Temp\inin_tracing\2016-10-06\logskim_core.ininlog
        Error getting an ElasticSearch client:no Elasticsearch node available

    We have an internal server running Elasticsearch, and I also installed a local instance of Elasticsearch on my computer; running Logskim against both gives the same result each time. I must have something off in the command line. Here is the output against my local instance:

        {
          "status" : 200,
          "name" : "elasticsearch",
          "cluster_name" : "logskim",
          "version" : {
            "number" : "1.5.0",
            "build_hash" : "544816042d40151d3ce4ba4f95399d7860dc2e92",
            "build_timestamp" : "2015-03-23T14:30:58Z",
            "build_snapshot" : false,
            "lucene_version" : "4.10.4"
          },
          "tagline" : "You Know, for Search"
        }

    Thanks
    Guy
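
    A quick sanity check here is to confirm that the exact URL passed to -es returns the Elasticsearch banner shown above. A minimal Python sketch (the URL argument is a placeholder for whatever endpoint you give -es):

        # check_es.py - minimal sketch: verify an Elasticsearch node answers
        # at the endpoint you plan to pass to Logskim's -es flag.
        # Usage: python check_es.py http://logsearch.internal_server:9200
        import json
        import sys
        import urllib.request

        url = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:9200/"

        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                banner = json.load(resp)
        except OSError as exc:
            sys.exit("No Elasticsearch node reachable at %s: %s" % (url, exc))

        # A healthy node returns the banner JSON shown in the post above.
        print("cluster:", banner.get("cluster_name"))
        print("version:", banner.get("version", {}).get("number"))
        print("tagline:", banner.get("tagline"))

    If this prints the banner but Logskim still reports "no Elasticsearch node available", the problem is in how the client is pointed at the node rather than in basic connectivity.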


  • 2.  RE: Logskim Would Be Great But

    Posted 02-23-2017 19:54
    Did you ever figure out what was wrong?


  • 3.  RE: Logskim Would Be Great But

    Posted 02-23-2017 20:32
    Drew, I could not find the logstash instance; more than likely, my URL was bad. I did recently retrieve our production logstash instance URL and will test Logskim again using that. Thanks for providing a tool like this. I will keep working on this, as Kibana/Logstash is pretty cool for quickly spotting issues with its search capabilities. I will update this thread with my new URL. Thanks! Guy


  • 4.  RE: Logskim Would Be Great But

    Posted 03-13-2017 15:03
    Still having issues with Logskim finding the node from my command line. I did my best to interpret the readme file and use the right parameters:

        E:\logskim>logskim -es http://logging.servername.us/api/v1/logging/log a-Key="CIC" a-Env="Dev" a-App="Server" http.compression: true -hostname: vm-cic4-dev01
        Error getting an ElasticSearch client:no Elasticsearch node available

    Thanks
    Guy
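
    One thing that stands out: the value given to -es here is an HTTP API path, whereas the first post pointed -es at an Elasticsearch root endpoint (http://host:9200). If the logging server exposes Elasticsearch directly, an invocation along these lines may be closer to what Logskim expects (the port and log directory below are placeholders, and only flags already shown in this thread are used):

        logskim -es http://logging.servername.us:9200 -dir \\vm-cic4-dev01\d$\I3\IC\Logs -level 60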


  • 5.  RE: Logskim Would Be Great But

    Posted 06-09-2017 19:44
    I'm having the same problem. I created a brand-new Elasticsearch instance on AWS and can access it via the web, but when I run Logskim I get the same error:

        Error getting an ElasticSearch client:no Elasticsearch node available

    Did anyone ever get it to work with Elasticsearch?


  • 6.  RE: Logskim Would Be Great But

    Posted 06-09-2017 19:50
    Yep, sigh....:(


  • 7.  RE: Logskim Would Be Great But

    Posted 01-04-2018 16:00
    How about LogSnip, built into IC? Some data I pulled from an internal site:

    You might look at LogSnip to pull data from the log based on a filter (which you can create in LogViewer and save, then use with LogSnip). You can save to JSON or .ininlog format.

    Command-line help:

        Microsoft Windows [Version 6.3.9600]
        (c) 2013 Microsoft Corporation. All rights reserved.

        C:\Users\icadmin>logsnip
        You must specify at least one file to read (--log), and a destination for the snip (--out).
        Allowed options:
          -h [ --help ]           print usage message
          --from arg              Snip log(s) starting at specified time. Format: hh:mm:ss.mmm or YYYY-MM-dd@hh:mm:ss.mmm (use trailing Z for UTC)
          --to arg                Snip log(s) ending at specified time. Format: hh:mm:ss.mmm or YYYY-MM-dd@hh:mm:ss.mmm (use trailing Z for UTC)
          --out arg               File to create from merged logs - format defaults to ininlog
          --format arg (=ininlog) Format to use for output file: ininlog | json. Defaults to ininlog
          --filter arg            Name of filter to use when snipping log file(s).
          --filter_file arg       Name of xml file to load filters from. The inin_log_filters.xml file also used by ininlogviewer is used by default.
          --filter_list           List the named filters within the default filter file, or filter file specified on the command line.
          --detailed_errors       If set, display the full (nested) error information.
          --compress_data         If set, tries to convert ALL data strings into single-write instances [EXPERIMENTAL!].
          --log arg               Log file(s) to read.

    Example from an internal description I dug up:

        logsnip-w64r-5-0.exe --filter_file %UserProfile%\Documents\dsm-ininlogviewer-filters.xml --filter PolicyActionsJobDoWorkFailed --log *.ininlog --out PolicyActionsJobDoWorkFailed.json --format json

    The beginning of the output JSON looks like this:

        [
          {
            "timestamp":"00:00:05.004_0019",
            "type":"E",
            "calllevel":0,
            "thread":13036,
            "topic":"Recorder Processing",
            "level":11,
            "file":"$Id: //eic/main_team_dp1530/products/eic/src/recorder/irserver/Action.cpp#2 $",
            "function":"PolicyActionsJob::do_work()",
            "line":54,
            "message":"Action failed",
            "attributes":
            {
              "IR_RecordingID":"e5daee13-9558-d071-88c0-ab29ba9d0001"
            }
          },
          {
            "timestamp":"00:00:05.005_0018",
            "type":"E",
            "calllevel":0,
            "thread":7844,
            "topic":"Recorder Processing",
            "level":11,
            "file":"$Id: //eic/main_team_dp1530/products/eic/src/recorder/irserver/Action.cpp#2 $",
            "function":"PolicyActionsJob::do_work()",
            "line":54,
            "message":"Action failed",
            "attributes":
            {
              "IR_RecordingID":"e5daee13-1c22-d05e-88c0-ab29ba9d0001"
            }
          },
          ...
        ]

    Now that you've got the traces you want in JSON form, use a scripting language to extract what you want from them. For example, in Python you could write a script like filter-json-recordingid-context-attribute.py (reproduced below) to extract just the RecordingID context attribute from each entry in the JSON file:

        # filter-json-recordingid-context-attribute.py
        # Usage: python filter-json-recordingid-context-attribute.py <json-input-file>
        # It is recommended to redirect output to a file.
        import json
        import sys

        with open(sys.argv[1]) as f:
            entries = json.load(f)

        for entry in entries:
            print(entry["attributes"]["IR_RecordingID"])

    Running this:

        python filter-json-recordingid-context-attribute.py PolicyActionsJobDoWorkFailed.json > failed-recordings.out

    ...gets you a file of just the recording IDs:

        e5daee13-9558-d071-88c0-ab29ba9d0001
        e5daee13-1c22-d05e-88c0-ab29ba9d0001
        e5daee13-5d77-d07b-88c0-ab29ba9d0001
        e5daee13-485c-d046-88c0-ab29ba9d0001
        e4daee13-3568-d033-88c0-ab29ba9d0001
        e7daee13-9954-d053-88c0-ab29ba9d0001
        e6daee13-50de-d0fc-88c0-ab29ba9d0001
        e7daee13-6bea-d085-88c0-ab29ba9d0001
        e7daee13-76a9-d0ea-88c0-ab29ba9d0001
        eadaee13-1e1b-d0ea-88c0-ab29ba9d0001
        ...

    If you wanted something other than the RecordingID context attribute, you could do:

        for entry in entries:
            print(entry["message"])

    ...or stuff like that.
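
    Generalizing that last idea, here is a minimal sketch (the script name and command-line shape are illustrative, not part of LogSnip) that pulls either any top-level field or any context attribute from a LogSnip JSON export:

        # filter-json-field.py - hypothetical helper, a generalization of the
        # script above. Pulls either a top-level field (e.g. "message") or a
        # context attribute under "attributes" (e.g. "IR_RecordingID").
        # Usage: python filter-json-field.py <json-input-file> <field> [--attr]
        import json
        import sys

        json_file, field = sys.argv[1], sys.argv[2]
        from_attrs = "--attr" in sys.argv[3:]

        with open(json_file) as f:
            entries = json.load(f)

        for entry in entries:
            # Context attributes live under "attributes"; everything else is top level.
            source = entry.get("attributes", {}) if from_attrs else entry
            value = source.get(field)
            if value is not None:
                print(value)

    For example, "python filter-json-field.py PolicyActionsJobDoWorkFailed.json IR_RecordingID --attr" reproduces the recording-ID list above, while "python filter-json-field.py PolicyActionsJobDoWorkFailed.json message" prints each entry's message.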


  • 8.  RE: Logskim Would Be Great But

    Posted 01-04-2018 19:05
    Thanks George! I will try this. Guy

