Wednesday, February 8, 2017
change the root password for MySQL in XAMPP
- Start the Apache Server and MySQL instance from the XAMPP control panel.
- After the server has started, open any web browser and go to http://localhost:8085/phpmyadmin/ (if you are running XAMPP on port 8085). This will open the phpMyAdmin interface, from which we can manage the MySQL server in the web browser.
- In the phpMyAdmin window, select the SQL tab from the right panel. This will open the SQL tab where we can run SQL queries.
- Now type the following query in the text area and click Go:
UPDATE mysql.user SET Password=PASSWORD('rootPass') WHERE User='root'; FLUSH PRIVILEGES;
- Now you will see a message saying that the query has been executed successfully.
- If you refresh the page, you will get an error message. This is because the phpMyAdmin configuration file is not aware of the newly set root password. To fix this we have to modify the phpMyAdmin config file.
- Open the file [XAMPP Installation Path]/phpmyadmin/config.inc.php in your favorite text editor.
- Search for the string
$cfg['Servers'][$i]['password'] = '';
and change it to
$cfg['Servers'][$i]['password'] = 'rootPass';
Here 'rootPass' is the password we set for the root user using the SQL query.
- Now everything is set. Save the config.inc.php file and restart the XAMPP server.
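For reference, here is a minimal sketch of how the relevant lines of config.inc.php might look after the change. The auth_type and user lines are the usual XAMPP defaults and are shown only for context; only the password line needs editing, and 'rootPass' is whatever password you chose above:
$cfg['Servers'][$i]['auth_type'] = 'config';   // XAMPP default: phpMyAdmin logs in automatically with the credentials below
$cfg['Servers'][$i]['user']      = 'root';     // the MySQL user whose password we just changed
$cfg['Servers'][$i]['password']  = 'rootPass'; // the new root password set with the SQL query above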
Monday, February 6, 2017
Designing Multi level, Multi Server, Multi Database System
By Neeraj Kumar Jha, 1:04 PM
Labels: distributed architecture, distributed system design, MongoDB, incrontab, multi database, multi database multi server architecture, multi layer architecture, multi server, mysql, windows services
I have been providing consultancy in the field of system design for quite some time. Recently one of my acquaintances approached me to help him design a system for handling a huge data gathering and processing workload.
I asked him what he meant by huge data gathering and processing. He explained that he is looking to build a system that provides a one-stop solution for analyzing data from various channels/sources.
He has a large customer base, and every customer has multiple channels/sources that provide data. The data size varies in the range of 3 to 5 million rows, and the number of columns is not fixed.
As the number of columns was not fixed for the unprocessed data, I asked him, "What about the number of columns after processing the data?" His answer was that it is going to be fixed.
On further discussion about the data, we zeroed in on MongoDB for storing the unprocessed data and MySQL for storing the processed data. The primary reasons behind using MongoDB for the unprocessed data were:
- The data size was huge.
- The number of columns was not fixed, i.e. the table schema was not fixed.
- It is supported by a strong developer community.
The processed data is structured, so we decided to use MySQL. MySQL is also a free database and is well supported by the developer community.
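As a rough illustration of this split, the sketch below stores variable-column raw rows in MongoDB and fixed-column processed rows in MySQL. The database names, collection/table names, columns and credentials are assumptions made up for the example, and it assumes the mongodb/mongodb library and PDO are available:
<?php
// Hypothetical example: raw rows with varying columns go to MongoDB,
// processed rows with a fixed set of columns go to MySQL.
require 'vendor/autoload.php';

$mongo = new MongoDB\Client('mongodb://localhost:27017');
$raw   = $mongo->selectCollection('analytics', 'raw_rows');

// Raw rows may carry any set of columns; MongoDB stores them as-is.
$raw->insertOne(['client_id' => 42, 'channel' => 'web', 'clicks' => 10, 'campaign' => 'spring']);
$raw->insertOne(['client_id' => 42, 'channel' => 'email', 'opens' => 7]); // different columns, same collection

// Processed rows always have the same columns, so a fixed MySQL schema fits well.
$mysql = new PDO('mysql:host=localhost;dbname=analytics', 'root', 'rootPass');
$stmt  = $mysql->prepare('INSERT INTO processed_rows (client_id, channel, metric, value) VALUES (?, ?, ?, ?)');
$stmt->execute([42, 'web', 'clicks', 10]);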
After deciding on the databases, we discussed the sources/channels of data. We agreed that it is a good strategy to bind each source/channel to the scripts needed for gathering and processing its data. This binding is then passed on to the client at the time of registration, i.e. when a client is registered it is mapped to its sources/channels, which automatically binds the scripts required for gathering and processing that client's data (a sketch of this mapping follows the list below). The advantages of using this strategy were:
- The source/channel, the scripts, and the client can work independently. If required, scripts can be added or removed at the client level.
- A logical change in a source/channel script will not affect the current working of the client.
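For illustration only, the mapping could be stored in the central MongoDB as documents along the following lines; all collection, field and script names here are assumptions:
<?php
// Hypothetical documents held in the central MongoDB.
// A channel/source is bound to the scripts that gather and process its data:
$channel = [
    '_id'            => 'channel_web',
    'gather_script'  => 'gather_web.php',
    'process_script' => 'process_web.php',
];
// At registration a client is mapped to its channels, which implicitly binds those scripts to the client:
$client = [
    '_id'       => 42,
    'name'      => 'Example Client',
    'channels'  => ['channel_web', 'channel_email'],
    'db_server' => 'mysql-client-42.internal',
];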
Till now, the structure looked like this:
We discussed data management at length; then we moved on to the data processing part. This is the point where a system can make or break.
We started by discussing the advantages and disadvantages of using a single server to execute the data gathering and processing scripts for all clients.
Disadvantages
- Risk of mixing client data
- If one process fails, then all the following processes in the queue will fail
- As the data volume is huge, it may slow down the server
- Client-specific customization is difficult
Advantages
- Low cost
- Easy to maintain
We came to the conclusion that it is not a good idea to use a single server for gathering/processing data for all clients.
We decided to use an independent server for each client. The client's respective server takes care of its data gathering and processing.
On each client's server we set up a cron job. This cron job executes a PHP script file that holds the client ID value. With the help of the client ID, the server gets the details of the scripts to be executed and the DB server details, along with any other details required for the data channel/source, from the central MongoDB (a sketch of this dispatcher follows below).
After getting the scripts, it puts them into a designated folder with the respective values. On the designated folder we set up an incrontab. As soon as the files are saved into the folder, incron comes into action and starts executing the files in parallel.
The status of the execution process is maintained for the respective client on the central MongoDB server with various status flags like running, completed, error, etc.
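A rough sketch of what such a per-client dispatcher could look like is given below. All paths, host names, collection names and document fields are assumptions invented for the example; it assumes the mongodb/mongodb library is installed via Composer:
<?php
// Assumed cron entry on the client's server (runs this dispatcher periodically):
//   */15 * * * * php /opt/dispatcher.php
// Assumed incrontab entry on the watched folder (runs each dropped job file in parallel):
//   /var/jobs/incoming IN_CLOSE_WRITE php /opt/run_job.php $@/$#
require 'vendor/autoload.php';

$clientId = 42;                    // the client ID value baked into this client's script
$jobDir   = '/var/jobs/incoming';  // the designated folder watched by incron

$central = new MongoDB\Client('mongodb://central-mongo.internal:27017');
$clients = $central->selectCollection('control', 'clients');
$status  = $central->selectCollection('control', 'job_status');

// Get the scripts to execute and the DB server details for this client from the central MongoDB.
$client = $clients->findOne(['_id' => $clientId]);
if ($client === null) {
    exit("Unknown client: $clientId\n");
}

foreach ($client['scripts'] as $script) {
    // Drop a job file into the watched folder; incron picks it up and executes it.
    $jobFile = sprintf('%s/%d_%s.job', $jobDir, $clientId, $script['name']);
    file_put_contents($jobFile, json_encode([
        'client_id' => $clientId,
        'script'    => (string) $script['name'],
        'db_server' => (string) $client['db_server'],
    ]));

    // Record the execution status flag for this client on the central MongoDB server.
    $status->insertOne([
        'client_id' => $clientId,
        'script'    => (string) $script['name'],
        'status'    => 'running',
        'started'   => new MongoDB\BSON\UTCDateTime(),
    ]);
}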
The final architecture of the system looks like the image below.
The above solution works well on a combination of Linux/Unix servers. To achieve the same on a Windows server we need to use Windows services, which provide the same functionality as cron.
The only disadvantage of this architecture is that the running cost per client is going to be high.
This architecture can be further optimized by using a job scheduler queue. With a job scheduler, the system can handle any exception that occurs during data gathering or processing.
Tuesday, December 25, 2007
how to create a clone of an existing table
By Neeraj Kumar Jha, 1:24 AM
Labels: asp, asp.net, database, db, db2, dotnet, java, javascript, mysql, oracle, php, vb, vb.net
MySQL
CREATE TABLE new_table_name LIKE existing_table_name;
This will create a new table with all the properties of the old table (like primary key and auto increment), but with no data.
CREATE TABLE new_table_name SELECT * FROM old_table;
This query will create a new table with all the data and structure, but properties like the primary key are dropped.
CREATE TABLE new_table_name (ID INT AUTO_INCREMENT PRIMARY KEY) SELECT * FROM old_table;
This query will create a new table with the old table's structure and a new column ID.
If ID already exists in the old table, the primary key and auto increment properties will be assigned to that ID column.
Oracle
CREATE TABLE newTable AS SELECT * FROM oldTable WHERE 1 = 2;
This will create table newTable with the same columns as oldTable.
It will have no constraints or indexes, and it will have zero (0) rows in it.
CREATE TABLE newTable AS SELECT * FROM oldTable WHERE 1 = 1;
OR
CREATE TABLE newTable AS SELECT * FROM oldTable;
This will create table newTable with the same columns as oldTable and with all the data from the old table.
SQL Server 2000
SELECT * INTO emp_new FROM emp WHERE 1 = 2;
This will create the table structure alone.
SELECT * INTO emp_new FROM emp WHERE 1 = 1;
This will create the table structure along with the data.