It would be nice to know how this file is structured and all its possible sections, keys, and values. Does anyone know of documentation on the file format?
After restoring a backup, the server won't start.
restoring
# tar -izxf /var/www/bak/db/2013-11-10-1437_mysql.tar.gz -C /var/www/bak/db_import
# innobackupex --use-memory=1G --apply-log /var/www/bak/db_import
# service mysql stop
# mv /var/lib/mysql /var/lib/mysql-old
# mkdir /var/lib/mysql
# innobackupex --copy-back /var/www/bak/db_import
# chown -R mysql:mysql /var/lib/mysql
# service mysql start
error log
131110 21:24:20 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
2013-11-10 21:24:21 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2013-11-10 21:24:21 6194 [Warning] Using pre 5.5 semantics to load error messages from /opt/mysql/server-5.6/share/english/.
2013-11-10 21:24:21 6194 [Warning] If this is not intended, refer to the documentation for valid usage of --lc-messages-dir and --language parameters.
2013-11-10 21:24:21 6194 [Note] Plugin 'FEDERATED' is disabled.
/usr/local/mysql/bin/mysqld: Table 'mysql.plugin' doesn't exist
2013-11-10 21:24:21 6194 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
2013-11-10 21:24:21 6194 [Note] InnoDB: The InnoDB memory heap is disabled
2013-11-10 21:24:21 6194 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2013-11-10 21:24:21 6194 [Note] InnoDB: Compressed tables use zlib 1.2.3
2013-11-10 21:24:21 6194 [Note] InnoDB: Using Linux native AIO
2013-11-10 21:24:21 6194 [Note] InnoDB: Not using CPU crc32 instructions
2013-11-10 21:24:21 6194 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2013-11-10 21:24:21 6194 [Note] InnoDB: Completed initialization of buffer pool
2013-11-10 21:24:21 6194 [Note] InnoDB: Highest supported file format is Barracuda.
2013-11-10 21:24:22 6194 [Note] InnoDB: 128 rollback segment(s) are active.
2013-11-10 21:24:22 6194 [Note] InnoDB: Waiting for purge to start
2013-11-10 21:24:22 6194 [Note] InnoDB: 5.6.12 started; log sequence number 636992658
2013-11-10 21:24:22 6194 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
2013-11-10 21:24:22 6194 [Note] - '127.0.0.1' resolves to '127.0.0.1';
2013-11-10 21:24:22 6194 [Note] Server socket created on IP: '127.0.0.1'.
2013-11-10 21:24:22 6194 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.user' doesn't exist
131110 21:24:22 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
mysql_upgrade
/opt/mysql/server-5.6/bin/mysql_upgrade -u root -pxxxxx -P 3308
Warning: Using a password on the command line interface can be insecure.
Looking for 'mysql' as: /opt/mysql/server-5.6/bin/mysql
Looking for 'mysqlcheck' as: /opt/mysql/server-5.6/bin/mysqlcheck
FATAL ERROR: Upgrade failed
I read an article, "Determine whether you've already generated SSH keys", which says that for SSH on Windows the keys are in C:\Documents and Settings\userName\Application Data\SSH\UserKeys\, but I have found the keys in C:\Documents and Settings\userName\Application Data.SSH. Is there a setting that determines where these keys are put, or am I reading the wrong documentation?
I'm struggling with DNS caching issues on a Windows based LAN.
I've noticed that if I change a DNS record on a domain hosted by a 3rd-party nameserver, I always seem to be the very last person to see the change. I can often query the domain using a service that checks propagation around the world, like www.whatsmydns.net, and I usually find that all the other DNS servers are correct and it's only my own server that still has the old IP, even 8-12 hours later. This is an issue for us: we're website developers and often make changes to DNS records, so these huge delays are frustrating.
It seems to be because the primary domain controller (+ Active Directory & DNS) on our LAN, which is also our local DNS server, caches records for ages, way beyond the record's published TTL. How can I stop the Windows DNS server from caching, or reduce the caching to only an hour or so?
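For reference, the two knobs I've found so far (I haven't confirmed either actually fixes this, so treat them as assumptions on my part): dnscmd can flush the DNS Server service's cache by hand, and a MaxCacheTtl registry value is supposed to cap how long the service caches records:

```
rem flush the DNS Server service cache (separate from the client resolver cache)
dnscmd /clearcache

rem cap server-side caching at one hour, then restart the service
reg add HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters /v MaxCacheTtl /t REG_DWORD /d 3600
net stop dns
net start dns
```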
I am trying to set up the mod_status module in Apache2 per the documentation here: http://www.serverdensity.com/docs/agent/apachestatus/
My problem is that my default site is overriding the /server-status location. What am I doing wrong?
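For reference, the relevant part of my config is essentially the snippet from that guide (the Allow address is just an example here, not necessarily what I have):

```apache
ExtendedStatus On

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
```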
Hi
I am trying to find some help in the FishEye documentation on adding a git repository to it. This is all I can get, and I have no idea what to put in the repository location (which URL do I put there: git://, ssh://, or https://?)
Can someone please help me out!
Thanks.
Does anyone have an idea how to scan over a home network, or have documentation about this? I can't find a good solution.
It's an OfficeJet 7310 All-in-One, capable of network scanning and printing, on a Windows platform.
Server: neatx-server 0.3.1+svn59-0~ppa1~lucid1
Client: NX Client for Windows 3.4.0-7
Sorry if this is a stupid question, but I googled and couldn't find any documentation on this topic... How can I reconnect to a disconnected NX session? I can see sessions in NX Session Administrator, but there is no way to reconnect to them. The NX Client seems to ignore any existing sessions and create new ones.
I have a CentOS server running WHM that uses FastCGI (mod_fcgid) running PHP 5.2.17 on Apache 2.0 with suEXEC. When I start Apache it comes up fine and serves requests. If I run ps in the terminal as root, I see the php processes and they are owned by their httpd parent processes.
After X amount of time (it varies, but typically not much longer than a few hours) the server will begin spawning PHP jobs owned by the init process (PID 1).
Example of good listing:
12918 18254 /usr/bin/php
12918 18257 /usr/bin/php
12918 18293 /usr/bin/php
12918 18545 /usr/bin/php
12918 18546 /usr/bin/php
12918 19016 /usr/bin/php
12918 19948 /usr/bin/php
Then later something like:
1 6800 /usr/bin/php
1 6801 /usr/bin/php
1 7036 /usr/bin/php
1 8788 /usr/bin/php
1 10488 /usr/bin/php
1 10571 /usr/bin/php
1 10572 /usr/bin/php
The php processes owned by PID 1 never get cleaned up. Why would these processes be running? We don't use setsid or anything beyond basic PHP in the code this server runs.
Cheers & Thanks
Karmic only has Mumble 1.1.8, but if I want to connect to a 1.2 server I need to upgrade. So I would like to know how I can upgrade to Mumble 1.2.2 without messing myself up later, when I upgrade to 10.04 and beyond. I just want a smooth transition into the next versions of Mumble.
Is there any way to upgrade to this newer version and either keep it in the package manager or make it not interfere with the natural upgrades the program will later receive from the package manager?
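From what I've read, apt pinning might be the mechanism for this: the package stays under the package manager, but a pinned origin wins until I remove the pin. A sketch of /etc/apt/preferences (the origin string is a guess on my part; apt-cache policy would show the real one):

```
Package: mumble
Pin: release o=LP-PPA-mumble
Pin-Priority: 550
```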
Thanks,
Dan
I just finished up configuring a fairly default installation of Tomcat. My Apache configuration was pre-existing, and post-Tomcat it still has no issues. I am using mod_jk to (if I'm saying this correctly) interface between Apache and Tomcat, and have my conf files set up for my workers, etc.
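For reference, my worker wiring is essentially the stock mod_jk example (the ports and paths below are the defaults, not necessarily exactly what I have):

```apache
# httpd.conf
JkWorkersFile conf/workers.properties
JkLogFile     logs/mod_jk.log
JkMount       /test/* worker1

# conf/workers.properties contains:
#   worker.list=worker1
#   worker.worker1.type=ajp13
#   worker.worker1.host=localhost
#   worker.worker1.port=8009
```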
I put my test file (Simply: http://tomcat.apache.org/tomcat-4.1-doc/appdev/sample/web/hello.jsp) into my tomcat/webapps/ directory and then call it via http://localhost/test/hello.jsp. From here Apache returns a "502 Bad Gateway" response.
I confirmed this via the Apache logs, but beyond that I have no idea how to diagnose the issue. I assume the 502 is because Tomcat did not respond. I'd like to confirm if Tomcat received the request, but cannot locate the log file.
At this point I had thought my installation was complete, so not sure where to go from here. Any input would be appreciated.
Looking at the details of a certificate using the following:
openssl x509 -noout -text -purpose -in mycert.pem
I find a bunch of purpose flags (which I've discovered are set by the various extensions attached to a certificate).
One of these purpose flags is "Any Purpose". I can't seem to find ANY documentation on this flag, or on why it is or isn't set.
Do any of you know where I can find more information on this purpose and what it means?
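For what it's worth, from my experiments the flag shows up even on a bare self-signed certificate with no extendedKeyUsage extension at all, which makes me suspect "Any Purpose" is simply always "Yes" (an assumption on my part; the throwaway cert below is just for illustration):

```shell
# generate a throwaway self-signed cert with no extendedKeyUsage extension
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=purpose-test" \
    -keyout /tmp/purpose-test.key -out /tmp/purpose-test.pem -days 1 2>/dev/null

# even so, the purpose list reports "Any Purpose : Yes"
openssl x509 -noout -purpose -in /tmp/purpose-test.pem
```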
Thanks,
I've read through the documentation for nginx's HttpProxyModule, but I can't figure this out:
I want it so that if someone visits, for example, http://ss.example.com/1339850978, nginx will proxy them to http://dl.dropbox.com/u/xxxxx/screenshots/1339850978.png.
If I was to just use this line in my config file:
proxy_pass http://dl.dropbox.com/u/xxxxx/screenshots/;
then they would have to append the .png themselves.
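What I imagine is needed (though I haven't gotten it working, so this is a guess) is a regex location that captures the number and appends the extension:

```nginx
location ~ ^/(\d+)$ {
    proxy_pass http://dl.dropbox.com/u/xxxxx/screenshots/$1.png;
}
```

I've also read that using variables in proxy_pass can require a resolver directive so nginx can resolve the hostname at runtime, but I'm not sure whether that applies here.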
tia,
David.
The only way I know how to open the Select Users, Computers, Service Accounts or Groups is by right clicking on a folder and selecting Properties - Security Tab - Edit - Advanced.
Below is a screen shot of the window I want to access:
Is there any other way to view the full list (Find Now)? I am writing documentation and I want the user to check if a user exists before creating it. I would ideally like them to access this via control panel or similar.
I'm trying to find a computer at work whose physical location we can't determine. There's no documentation for it in the inventory, but it responds to nslookup and ping, and I can log onto it and edit its files. However, we have no idea where in the building it is. Does anyone have any good ideas for finding it, other than making it beep repetitively and annoying people while I run around looking for it?
I'm looking for a way to track myself and receive quality data upon which I can write future scripts/programs.
For example, I use Google Reader a lot. I'd like to track the hrefs that garner my clicks. Further, I'd like to drop all of the words of each href into a database where they can be stacked in a hierarchical manner. At the end of the week I want to know that "Ubuntu" garnered 448 clicks and "Cheetos" garnered 2. :)
That's just one example... I'd like this tracking and data-collecting to extend beyond my browser.
I know writing something to do this myself wouldn't be too awfully difficult but if something already exists I'd happily use it.
Thanks in advance.
Primary OS: Ubuntu 10.04
I have OCS NG and GLPI set up and working fine independently of each other on the same host. For a while GLPI was successfully importing computers from OCS NG, but now GLPI shows there are new computers to import and doesn't do anything when asked to import them.
How do I find what is going on? Are there any log files or debug modes I can turn on? Documentation on the interaction of these pieces of software is pretty sparse.
I realize this is more appropriate for our company's admin group to field. However, pretend they are unresponsive just for the sake of discussion =)
If my system is prompting me to reboot now or in a set time (say, in 15 mins), is there a way to delay that even further? It is usually just an inconvenient time and I would like to delay beyond the stated time.
(ex) System Restart Required: A newly installed program requires this computer to be restarted. Please save your work and restart your computer. Your computer will automatically be restarted in: xxmins...
Thanks for any responses in advance!
I've been trying to set up a PPTP VPN to connect to microvpn.com, but I can't get it to work. Their site has no documentation for Linux. Can you help me out?
I'm a new user and can't post images, but here's a screenshot:
http://img708.imageshack.us/img708/4962/screenshot1yf.png
I am running into issues because the CA bundle shipped with my version of cURL is outdated.
curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html
Reading through the documentation didn't help me because I didn't understand what I needed to do or how to do it. I am running RedHat: what do I need to do to update my CA bundle?
Is it possible to rename an archive in Amazon Glacier?
The documentation says:
After you upload an archive, you cannot update its content or its
description. The only way you can update the archive content or its
description is by deleting the archive and uploading another archive.
That would lead me to think that it's not possible, but I'm not sure whether the file name is considered part of the archive description.
Hello,
I'm trying to write an SMF manifest but I'm stuck because I can't find the complete documentation. Their DTD (/usr/share/lib/xml/dtd/service_bundle.dtd.1) is a joke; it's full of CDATA.
For instance, I'm looking for the complete specification for <service_fmri/>: which value attributes are valid, etc.
Where can I find the complete specification for writing Solaris SMF manifests?
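For context, what I've pieced together so far from shipped manifests looks like this (the service name and paths are mine, and I'm not sure every part is valid, which is exactly the problem):

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type='manifest' name='myapp'>
  <service name='application/myapp' type='service' version='1'>
    <create_default_instance enabled='false'/>
    <single_instance/>
    <exec_method type='method' name='start'
        exec='/opt/myapp/bin/myapp start' timeout_seconds='60'/>
    <exec_method type='method' name='stop'
        exec=':kill' timeout_seconds='60'/>
  </service>
</service_bundle>
```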
Thanks.
I'm trying to create a table in DokuWiki with a cell that vertically spans; however, unlike the examples in the syntax guide, the cell I want to create has more than one row of text.
The following is an ASCII version of what I'm trying to achieve
+-----------+-----------+
| Heading 1 | Heading 2 |
+-----------+-----------+
| | Multiple |
| Some text | rows of |
| | text |
+-----------+-----------+
I've tried the following syntax
^ Heading 1 ^ Heading 2 ^
| Some text | Multiple |
| ::: | rows of |
| ::: | text |
but this generates the output
+-----------+-----------+
| Heading 1 | Heading 2 |
+-----------+-----------+
| | Multiple |
| +-----------+
| Some text | rows of |
| +-----------+
| | text |
+-----------+-----------+
I can't find anything in the DokuWiki documentation, so I'm hoping I'm missing something fundamentally simple?
Hi
I am trying to find some help in the FishEye documentation on adding a private git repository to it. This is all I can get.
I can set up a public repository using this method, but I'm not able to add a private one. I believe I need to add some credentials to FishEye for it to be able to access the repo, but where do I add these creds?
Can someone please help me out!
Thanks.
I'm giving a hands-on presentation in a couple of weeks. Part of this demo is basic MySQL troubleshooting, including use of the slow query log. I've generated a database and installed our app, but it's a clean database and therefore it's difficult to generate enough problems.
I've tried the following to get queries in the slow query log:
Set slow query time to 1 second.
Deleted multiple indexes.
Stressed the system:
stress --cpu 100 --io 100 --vm 2 --vm-bytes 128M --timeout 1m
Scripted some basic webpage calls using wget.
None of this has generated slow queries. Is there another way of artificially stressing the database to generate problems? I don't have enough skill to write a complex JMeter or other load generator. I'm hoping perhaps for something built into MySQL, or another Linux trick beyond stress.
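One idea I've had, though I'm not sure it counts as realistic load for a demo: MySQL can be made to produce slow queries directly, with no data at all, since SLEEP() and BENCHMARK() both run longer than a 1-second long_query_time:

```sql
-- sleeps for 2 seconds, so it should land in the slow query log at long_query_time = 1
SELECT SLEEP(2);

-- burns CPU computing MD5 fifty million times
SELECT BENCHMARK(50000000, MD5('test'));
```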