
Building AI Strategy
Building the knowledge to confidently engage with AI technologies, strengthen security programs, align with global standards, and drive ethical, secure AI adoption across organizations, does not come automatically.
My journey with Machine Learning and AI started back in 2017, and my first article about AI on this blog appeared in 2019. I believe governance is of utmost importance in many areas, AI being one of them.
Three years after my last appearance at a conference due to the COVID-19 lockdowns, I was invited to present at the 9th Information Security Conference in Greece. The conference theme was Enabling a Secure Future: Managing Risks in a Constantly Changing World. The conference was held virtually / online on the 17th of February, 2022.
Once upon a time I spent a total of 4 hours (over three days) in meetings, stating that I would definitely not approve a security exception. At least, not until someone demonstrated that the requested exception removed the root cause or was a valid workaround.
During the last 3 months I found myself in discussions about patch and vulnerability management more often than expected. I have to say, there is much misunderstanding around these two processes; so much that I would argue several organizations are exposing themselves significantly, simply because the touch points and (lack of) dependencies between the two processes are not clear.
I often get into discussions about budgets and how much a company should invest in its security program. There is no easy answer because the problem we are trying to solve has many unknowns.
There are many ways one may address this question, the most common being a rule of thumb.
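To make the rule-of-thumb approach concrete, here is a minimal sketch. It assumes the common convention of expressing security spend as a fraction of the overall IT budget; the function name and the 7% default are purely illustrative, not figures from the original text, and real benchmarks vary widely by industry, company size, and risk profile.

```python
def security_budget(it_budget: float, pct: float = 0.07) -> float:
    """Rule-of-thumb estimate: security spend as a share of the IT budget.

    The 7% default is illustrative only; substitute a benchmark that
    fits your organization's industry and risk profile.
    """
    return it_budget * pct

# Example: a 2,000,000 IT budget at an illustrative 7%
print(security_budget(2_000_000))  # 140000.0
```

A rule like this gives a quick starting point for the conversation, but it says nothing about the unknowns mentioned above, which is why it should be treated as a first approximation rather than an answer.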