7545 Irvine Center Drive

Irvine, California 92618​ USA

info@datalog.ai / Tel: +1-415-741-5520

© 2019 by Datalog, Inc. / All rights reserved  

Feb 15

Gartner's latest graphic on AI in business.


More than a quarter of CIOs already have an AI program underway.

Video from a panel forum I participate in on Currnt.com. See the executive summary below.

Executive Summary (by Frank Kovacs)

AI as a technology brings many benefits, but also many concerns:
  • As we deploy AI, we must be extremely cognizant of how we use the technology so that we do not instill and perpetuate bias through our AI implementations.
  • It is also critical that the benefits of AI technology help improve cybersecurity and thwart those attempting to use the same technology to compromise security for their own gain.
  • Finally, it is critical that AI implementations themselves are built with an appropriate level of security so that they are not compromised and their integrity breached.

Key Point: How AI is being used in security operations

Key Takeaways:
  • "Sandboxing is a technique that allows potentially suspect content to be diverted to a 'sandbox' for execution. File attachments, web links, etc. can be tested automatically by security software to ensure they are safe before passing them on to end users." by Sandy Bish
  • "We have been using AI for security operations: threat management, vulnerabilities, log analysis, and cloud security." by Anand V
  • "The concept of sandboxing is also the perfect environment for proper training of AI. The engine takes the first pass at determining the level of safety, but the recipient is also given the ability to override the decision, thus teaching the AI engine to better determine right from wrong." by Marsha Williams

Key Point: How are trust, transparency, and bias being addressed in your AI implementations?

Key Takeaways:
  • "Certain applications of AI are not 100% ready for prime time, but in due time they will be. Natural language, application of 'emotion', and life-or-death scenarios must still be reviewed by humans to ensure the recommendations pass appropriateness tests." by Sandy Bish
  • "Amazon recently had to shut down its AI-based recruitment application as a result of innate demographic biases. AI's intersection with cybersecurity will become an interesting area down the road, especially when ethically applied to secure IT systems." by Jacobs Edo
  • "Regulators will need to get more involved if we are to have trust and transparency in AI, regardless of the use case: the same way we look for star ratings in travel, food grade levels, LEED levels for green housing, buyer/seller ratings on eBay, pharma testing before drugs go on the market, etc." by Falguni Desai

Key Point: Overall security concerns with AI implementations

Key Takeaways:
  • "Cybersecurity is a serious concern, and it looks like no one is yet able to fully secure their digital assets. We have seen that hackers, often acting on behalf of countries, are able to penetrate almost any corporation or government." by Jay Dwivedi
  • "For all applications, a risk:benefit analysis needs to be completed. Sometimes, with a low-risk impact, the AI solution should be allowed to make the optimal decision. For the highest-risk scenarios, it should be limited to making a recommendation, with justification, for a human to confirm and use." by Sandy Bish
  • "The biggest use case is AI for connected devices. We have built a blockchain-based, IoT-enabled farm-to-consumer platform using AI. We have seen some security issues during the integrations, especially where connectivity is poor in remote villages, leaving Wi-Fi devices vulnerable to hacking." by Anand V

Key Point: Some other key AI-based observations

Key Takeaways:
  • "The challenge in governance may be the definition of boundaries. Where does 'analytics' end and AI begin?" by Jack C Crawford
  • "I don't see enough discussion or clarity regarding how AI will be insured. There have been dangerous incidents involving robots, and as they become more autonomous with AI their ability to make decisions will evolve. But when things go wrong, who will pay?" by Falguni Desai
  • "For autonomous vehicles, testing every permutation isn't feasible; this drives improved simulation testing, which will have payback in other industries. Initially, the driver, who should be verifying vehicle behavior, will hold some liability... which may cause many drivers to avoid the technology." by Sandy Bish
Datalog.ai | a Managed Ventures Company