Tuesday, 8 December 2020

New project: an AI democratic congress

We have built a market decision-making algorithm in which three AIs try to guess whether or not the stock market will go up.

What if we pack a variety of AIs, all built for the same purpose, into a virtual congress and let them determine the best trading decisions? We are working on it.

Although training AIs takes a lot of computational resources and time, inference does not consume nearly as much as the learning process does. Therefore, in theory, we concluded that we could assign quite a fair number of AIs to the assembly to hold virtual debates.
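The congress idea can be sketched very simply: each member model votes on whether the market will go up, and the majority wins. The code below is a minimal illustration, not the actual implementation; in the real project each vote would come from a trained Keras model whose prediction is thresholded to 0 or 1.

```python
import numpy as np

def congress_vote(votes):
    """Return the majority decision from a list of 0/1 votes.

    1 means "the market will go up", 0 means it will not.
    """
    votes = np.asarray(votes)
    # Strict majority: more than half of the members voted 1.
    return 1 if votes.sum() * 2 > len(votes) else 0

# Example with three members, as in the current setup
# (the individual votes here are hypothetical).
decision = congress_vote([1, 1, 0])
print(decision)  # 1: the majority says "up"
```

With an odd number of members, a strict majority always exists, so the congress can never deadlock.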

Saturday, 7 November 2020

Things I have learnt. I am thinking about turning the lessons into a contribution to society.

I have been working for a company where we offer solutions using AI technologies. I keep thinking that I am not contributing enough for what I earn. However, what I have done might be more than acceptable for a first-year AI developer.

I have no intention of quitting the job, but I am perhaps ready to think about creating my own business. Luckily or not, our Japanese government is willing to lend capital at relatively generous interest rates to business owners. It honestly reminds me of the 2008 financial crisis and its many subprime loans. It might, however, be a chance worth taking, since macro risks like that will be borne by the government rather than by individuals like me, at least for now. What is more, I will do the right thing with the capital.

So I will make plans as CEO of ANEKOSYSI, which is going to prevail soon. We will change the world into a better and more sustainable place.

Friday, 24 July 2020

Come and join the AI field!

Humanity has invented almost everything in the physical world. This sentence sticks in my brain every single time I come up with a seemingly new idea that turns out to have already been discovered by someone else.
In the AI field, however, there is still space to explore. After working in the AI sector for four months, I realized that there are many things to automate with this brilliant technology. What is more, this field is mainly idea-based.
Although nobody usually trusts an AI model without mathematical and statistical facts supporting it, you can create one for personal use thanks to the Keras Python library. Besides, mathematics does not matter anymore when your model does a great job. Results are pretty much everything.
I will restart developing the market prediction AI machine with you. Although you will probably get a better answer somewhere else online, any questions are welcome.

Saturday, 11 July 2020

Hello All!

I am pleased to announce that I am going to start my own business called "ANEKOSYSI". Even in Japanese, my native tongue, the name is weird enough. We needed, however, a certain amount of uniqueness in our name first. We hope that philosophy will live on in our unique and innovative behaviour and products.

I originally wanted a business named "Neko System". That name turned out to be too long and not rhythmical enough. What is more, an insightful entrepreneur had already used it. That was okay with me, since the shorter form sounds like "Analysis", which we might well do in our business. So let's make it NekoSys and sandwich it inside "AI", which is our primary strength to contribute to society.
It is ANEKOSYSI's first step with its paw. Let's go!

Saturday, 29 February 2020

Summary 29th February 2020


Where are we?


We have been working on a project to forecast the stock market. The purpose of this project has been clear: to make a profit by applying deep learning to the financial market.


How can you achieve this?


There are three crucial factors to accomplish this goal. 

  1. The quantity and quality of the training data
  2. The sophistication and fitness of Layers in the Sequential model
  3. How we operate AIs


The quantity and quality of the training data


To reach a better outcome, we need to avoid overfitting. There are two ways to achieve this. The first is to increase the amount of training data. The other is to add a dropout layer to the Sequential model. It is sometimes challenging, however, to increase the volume of training data.


The easiest way to increase the amount of training data is to add price data from various markets. However, Japanese stock market information mingled with American stock market data might confuse the AI during the learning process. I also confirmed that it does not produce a drastically better result either.

On the other hand, specializing the dataset by narrowing down the types of information in it leaves too little data to prevent overfitting. Is there a way to boost the volume of the training dataset while avoiding excessive diversification in it?


Well, I found one. 


As shown in the figure above, we formerly just divided the data into segments and did not allow any overlap between them. It turned out, however, that this was not the best way. The latter approach, which allows segments to overlap, lets us extract more training data from the same original data than the former. Note that the "Data" shown abstractly in the image above is, for example, like the image data below.
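The overlapping idea can be sketched in a few lines: instead of cutting the price series into disjoint chunks, slide a window forward one step at a time, so the same series yields far more training samples. The function name and window sizes below are illustrative.

```python
import numpy as np

def make_windows(series, window, step=1):
    """Extract (possibly overlapping) windows from a 1-D price series.

    step == window reproduces the former disjoint split;
    step == 1 gives the new, maximally overlapping split.
    """
    return np.array([series[i:i + window]
                     for i in range(0, len(series) - window + 1, step)])

prices = np.arange(10.0)                            # stand-in price series
disjoint = make_windows(prices, window=5, step=5)   # former method
overlap = make_windows(prices, window=5, step=1)    # new method
print(len(disjoint), len(overlap))  # 2 6
```

From the same ten data points, the overlapping split produces three times as many training samples as the disjoint split.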



There is another excellent feature of this mechanism. To improve the quality of the training dataset, eliminating static data (periods with little price fluctuation) might be an acceptable option, but it reduces the volume of the training data. The overlapping procedure, however, makes up for this shortcoming by multiplying the quantity of training samples.
From now on, the goal is to exploit the data's full potential: increase the amount of training data using the system above, standardize it, and appropriately remove specific data that is not considered necessary for the learning process.
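The two follow-up steps mentioned above, standardizing and removing static data, can be sketched as below. The threshold value is an arbitrary placeholder, not a tuned parameter from the project.

```python
import numpy as np

def standardize(w):
    """Z-score standardize one window (small epsilon avoids division by zero)."""
    return (w - w.mean()) / (w.std() + 1e-8)

def filter_static(windows, min_range=0.5):
    """Keep and standardize only windows whose high-low range
    exceeds min_range, i.e. drop 'static' price periods."""
    return [standardize(w) for w in windows
            if w.max() - w.min() >= min_range]

windows = [np.array([100.0, 100.1, 100.0, 100.1]),   # static: dropped
           np.array([100.0, 101.0, 99.0, 102.0])]    # active: kept
kept = filter_static(windows)
print(len(kept))  # 1
```

Because the overlapping-window step multiplies the number of samples first, we can afford to discard the static ones without starving the model of data.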


The sophistication and fitness of Layers in the Sequential model


Although it takes effort to comprehend, this factor plays a significant role in composing an exceptional AI model. Therefore, it might be imperative to learn about the Layers in Keras.


At first, I was using a Sequential model copied from a random place without thinking. There are, however, certain limitations to this approach. Accordingly, I decided to learn more about the theory behind it. One essential piece of information I learned is the order of Layers in a Sequential model.

- Convolution

- Relu

- Pooling

- Flatten

- Fully Connected (Dense)

- Softmax


Note that the Convolution, Relu and Pooling block can be repeated. I have no idea why, but repeating it, as in the Sequential model below, often generates a better result.


Please remember that I am still unsure about the theory behind it and what is happening inside this model. Therefore, the Sequential model above might look like a complete mess to professionals.


How we operate AIs


Since I have not put much effort into this matter yet, this part remains a problem to be solved.

Although I implemented an AI democratic decision-making system to make the decisions made by AIs more reliable and accurate, the problem is when to use it and how to reflect its results in buying and selling actions.


Conveniently or not, the Japanese stock market crashed a few days ago. Let us show those three AIs the latest price chart and see what happens. 



Two-thirds of the AIs said it would go up. I wonder what will actually happen next week.



Tuesday, 4 February 2020

Still progressing.

I have been working on the so-called “Democratic approach project”. Under the project, I attempt to make a democratic AI system where AIs make decisions based on their voting outcome.
I started to make multiple models or AIs.
The first AI I made was, by accident, quite favourable. I call it Liselotte.
A problem, however, arose when I tried to create the second AI called Alma.
Issues that occurred during the process are below.
1.    Overfitting
2.    Outputs of 0 are too frequent (70% to 100% of all the predictions it generates)

When I try to fix #1, the overfitting problem, by adding a normalization layer such as Dropout, problem #2 appears, and vice versa.
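One possible remedy for the "outputs of 0 are too frequent" symptom, my own assumption rather than something from the project, is class weighting: if the training labels are dominated by 0, weight each class inversely to its frequency so the model is penalised more for ignoring the rare class. Keras accepts such a dictionary via the `class_weight` argument of `fit()`.

```python
import numpy as np

def inverse_freq_weights(labels):
    """Map each class to n_samples / (n_classes * class_count),
    so rarer classes get proportionally larger weights."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    return {int(c): len(labels) / (len(classes) * n)
            for c, n in zip(classes, counts)}

# 80% zeros, mimicking the skew described above (hypothetical labels).
labels = [0] * 80 + [1] * 20
print(inverse_freq_weights(labels))  # {0: 0.625, 1: 2.5}
```

The resulting dictionary would then be passed as `model.fit(..., class_weight=weights)`.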

There are mainly three variables in the equation for solving this dilemma: 1. the amount and intensity of normalization layers, 2. the number of epochs, and 3. the chosen optimizer.

Other factors besides the training phase also influence the results: 1. the data arrangement, and 2. the model or layer architecture (slightly overlapping with the three variables above).
Occasionally, I find a better combination of them and get an acceptable model; in other words, I get another AI joining the AI democratic congress.



I have so far made three AIs called Liselotte, Alma and Vanessa. Alma and Vanessa seem much superior to Liselotte. Therefore, I plan to exile Liselotte from the assembly, but it is still under consideration.


Saturday, 18 January 2020

Planning. Again

Okay, we’ve faced numerous problems.
I was planning to build the single best AI through many attempts. There might, however, be a better way to enhance the prediction results.

The method is easy. First, create several good models with different layers and data; the number of AIs should ideally be odd, so that a vote cannot end in a tie.
Then take a vote amongst those AIs to decide the final prediction on the market.

One difficulty I can foresee is that the chosen AIs should probably be diversified. Therefore, the next plan is to build several quality AIs with different data.
