Search Results

Search found 12919 results on 517 pages for 'tool pack'.


  • Lucid Lynx login issue

    - by Bart Silverstrim
    Recently upgraded from Karmic to Lucid. The upgrade seemed to go well, with no noticeable issues. When I logged in, though, my window manager wasn't starting: applications would appear, but without control buttons or borders, so I figured the window manager needed a swift kick. I opened a web browser, and a quick Google search had me run "metacity --replace &", after which everything popped up. I re-ran the Compiz configuration tool to set my rotating desktop cube back up the way I liked it, and had to reconfigure my desktop switcher to the right number of desktops (although the first time I ran it, it crashed on the panel and reloaded... odd, but once it relaunched it seemed fine). Today I installed updates, rebooted and logged in for the second time since my upgrade. Again the window manager was dead, my Compiz settings were gone, and the workspaces were set back to four (and when I clicked on the preferences to change them, it crashed on the panel and reloaded again). Resetting everything made things look somewhat normal again; I'm guessing it'll work until I reboot. Googling around isn't turning up similar complaints about Lucid Lynx and the window manager. Before I go deleting preference files, does anyone else know of this kind of issue and what can be done about it? Or should I start taking stabs in the dark, deleting preference files in the hope that one of them is corrupt or contains something unsupported that's throwing Lucid for a loop?
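    A hedged aside for anyone hitting this: in GNOME 2 the session's window manager choice lives in a gconf key, and a stale or empty value there can produce exactly this "no borders until metacity --replace" behaviour. The key below is the standard one; whether the value should be compiz or metacity depends on your setup.

      gconftool-2 --get /desktop/gnome/session/required_components/windowmanager
      # if it comes back empty or pointing at something broken, set it back, e.g.:
      gconftool-2 --type string --set /desktop/gnome/session/required_components/windowmanager compiz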

    Read the article

  • Looking for an application to record audio and video on a linux "embedded" device

    - by Luke404
    I am working with a Linux x86 device with limited CPU resources (as a prototype we just use a Pentium-M netbook). We'd like to record video from one V4L2 device (we'll probably end up using just USB Video Class devices, like all modern webcams) and one audio stream from an ALSA source. The thing will have no screen and no keyboard, and obviously no X11 environment. Goals are:
      - do as little work as possible to cope with the limited CPU resources - for example, I'd like to record video in the native MJPEG I get out of the UVC devices
      - encoding audio to MPEG-1 Layer II (aka mp2) is fine, since it saves a lot of space (compared to raw PCM samples) and uses little CPU power
      - I don't mind losing some video frames here and there (UVC devices do that) as long as I can keep the audio and video streams synchronized
      - not require user input to start the thing (a Python script takes care of initialization, startup, shutdown, etc...)
      - be able to open the resulting files for postprocessing without too much effort (i.e. if mplayer or vlc can play it, it's fine)
    So far the only app I found that could be started from the command line and record V4L2 video + ALSA audio is mencoder, but I'm having some difficulties with it. It should be able to do this, but I cannot record audio and video together - just one of the two. And if I use two different processes to record to two different files, I have no way to get them in sync (audio is more or less always correct, but the video framerate varies over time and it seems to lack the timestamps needed to play it back at the correct speed). Long story short: how do you record an unconverted MJPEG stream (from a UVC device) and an audio stream (from an ALSA device, possibly encoded to any standard format) using a command line tool, to a single file (MPEG or any other container), keeping audio and video in sync?
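    For comparison, a sketch of how this might be done with ffmpeg rather than mencoder (hedged: the device names hw:1 and /dev/video0, the frame rate and the audio bitrate are placeholders for your hardware, and the option spellings assume a reasonably recent ffmpeg built with V4L2 and ALSA support; older builds spell these -vcodec/-acodec instead of -c:v/-c:a):

      # ask the UVC camera for MJPEG, store it unchanged, and encode only the audio to mp2
      ffmpeg -f alsa -i hw:1 \
             -f v4l2 -input_format mjpeg -framerate 25 -i /dev/video0 \
             -c:v copy -c:a mp2 -b:a 192k \
             output.mkv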

    Read the article

  • How do I Install Intermediate Certificates (in AWS)?

    - by getmizanur
    I have installed the private key (PEM encoded) and the public key certificate (PEM encoded) on an Amazon Load Balancer. However, when I check the SSL with a site test tool, I get the following error:
      Error while checking the SSL Certificate!! Unable to get the local issuer of the certificate. The issuer of a locally looked up certificate could not be found. Normally this indicates that not all intermediate certificates are installed on the server.
    I converted the crt file to PEM using these commands from this tutorial:
      openssl x509 -in input.crt -out input.der -outform DER
      openssl x509 -in input.der -inform DER -out output.pem -outform PEM
    During setup of the Amazon Load Balancer, the only option I left out was the certificate chain (PEM encoded); however, this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain?
    UPDATE: If you make a request to VeriSign they will give you a certificate chain. This chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (it is the topmost certificate) before adding it to the certificate chain box of your Amazon Load Balancer. If you are making HTTPS requests from an Android app, then the above instructions may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS: go here, click on the "retail ssl" tab, then click on "secure site" "CA Bundle for Apache Server", and copy and paste those intermediate certs into the certificate chain box. Just in case you have not found it, here is the direct link. If you are using GeoTrust certificates then the solution is much the same for Android devices; however, you need to copy and paste their intermediate certs for Android.
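    For reference, a minimal sketch of building and sanity-checking a chain file from PEM-encoded certificates (the filenames intermediate.pem, root.pem, chain.pem and mysite.pem are placeholders for whatever your CA issued):

      # chain file = intermediate CA cert(s) followed by the root, with your own site cert left out
      cat intermediate.pem root.pem > chain.pem

      # check locally that the chain actually completes for your certificate
      openssl verify -CAfile chain.pem mysite.pem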

    Read the article

  • Can I recover a rm -rf-ed Mercurial repository?

    - by WishCow
    I made the mistake of wiping out my entire project directory with a quick "rm -rf project". Of course, the .hg directory went with it. I had about 15-20 changesets that I have not pushed to anyone, and I would really, really like to get those back. The system is an Ubuntu machine, the partition where the delete happened is ext3, and the project consists mostly of PHP files. I know about the guideline of not writing to the disk in question. My first idea was to use the tool named scalpel to get the PHP files back, diff them with the current version from the repo, and somehow carve the changes out. While it succeeded, it did not recover the file names (or there is a switch I'm missing), so I'm left with a few thousand sequentially named .php files; combing through them is not an option. Can a kind soul please save me and suggest a way to either a) get the repo back, or b) get the files back, with filenames? For those wondering how I did such a stupid thing: I was working on a file in Vim which I wanted to remove from the repository:
      :!hg rm %
    This complained that the file is in a subrepository, so I specified the following:
      :!hg rm % -R engine
    which complained that the file has modifications, use -f to force. And this is when, somehow, I made up the following command:
      :!rm -rf % -R engine
    Somehow, seeing "force" makes me do a rm -rf by reflex.
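    A hedged suggestion that may recover filenames where scalpel cannot: extundelete (or ext3grep) works from the ext3 journal and directory blocks rather than carving raw content, so it can sometimes restore whole directory trees by name. A rough sketch, assuming the affected filesystem is /dev/sda3 and stays unmounted or read-only; ideally run it against a dd image rather than the real device:

      # image the partition first so recovery attempts cannot make things worse
      dd if=/dev/sda3 of=/mnt/other-disk/sda3.img bs=4M

      # try to restore the deleted directory by name from the image
      extundelete /mnt/other-disk/sda3.img --restore-directory project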

    Read the article

  • Measure Total Bandwidth for Billing

    - by TonyZ
    I am setting up a new network on which customers will host their applications. It needs to be able to scale out to a few hundred servers, and each server will have several VMs on it. Right now in my test environment, after the telco router, we are using a Linux router/firewall which is then connected to a Layer 2 switch; it could be a Layer 3 switch in the future. I need to track total bandwidth per VM for each machine, and I need to do it in a way that is not part of the VM. Each VM will have a private-class IP address which is NATed by the gateway, or we may eventually run more than one firewall/reverse proxy off a Layer 3 switch. So my thinking is that I can do it off a promiscuous port on the switches, or at the gateway firewall. I would like an out-of-the-box solution, preferably open source. Does anyone have suggestions on the easiest way to set this up, and the easiest tool to use? I have looked at the websites for Nagios, Zenoss, Zabbix, ntop on the firewall, etc. It is hard to tell just from the websites whether they do exactly this or not. Obviously, performance is also somewhat key here: anything running on the gateway should not drag it down doing traffic accounting. Thanks for any thoughts. Tony Zakula
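    Not an out-of-the-box product, but for the gateway case plain iptables counters are a cheap way to get per-IP byte totals that a billing script can poll. A minimal sketch; the chain name and the example VM address 10.0.0.5 are placeholders, and you would add one pair of rules per VM:

      # dedicated accounting chain, traversed by all forwarded traffic
      iptables -N VM_ACCT
      iptables -I FORWARD -j VM_ACCT

      # counter-only rules (no -j target): traffic from and to this VM
      iptables -A VM_ACCT -s 10.0.0.5
      iptables -A VM_ACCT -d 10.0.0.5

      # read the byte counters (e.g. from cron), then zero them after each billing run
      iptables -L VM_ACCT -v -n -x
      iptables -Z VM_ACCT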

    Read the article

  • NICs are not advertising correct speeds

    - by Squidly
    I have an IBM x336 that is not advertising the proper link speeds. One interface is, the other is not. I've tried to force it to 1000/Full, but then it just shows link down. I have confirmed the switch is set to auto-negotiate, like my other switches. I have also changed out my Ethernet cables. I'm at a loss where to look further. I have verified that the server will connect at 1G on a different switch, and this has also happened with two different servers on the same switch. This is my output from mii-tool -v for each interface:
      eth0: negotiated 100baseTx-FD, link ok
        product info:  vendor 00:08:18, model 24 rev 0
        basic mode:    autonegotiation enabled
        basic status:  autonegotiation complete, link ok
        capabilities:  1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
        advertising:   100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
        link partner:  1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
      eth1: negotiated 1000baseT-FD flow-control, link ok
        product info:  vendor 00:08:18, model 24 rev 0
        basic mode:    autonegotiation enabled
        basic status:  autonegotiation complete, link ok
        capabilities:  1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
        advertising:   1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
        link partner:  1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
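    The notable detail in that output is that eth0 has stopped advertising the gigabit modes at all. One hedged thing to try is explicitly re-enabling the full advertisement with ethtool and seeing whether it sticks (the 0x03f bitmask is the sum of the 10/100/1000 half- and full-duplex mode bits from the ethtool man page; adjust for your driver):

      # inspect the currently advertised modes
      ethtool eth0

      # re-advertise everything up to 1000baseT full duplex
      ethtool -s eth0 autoneg on advertise 0x03f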

    Read the article

  • How to crop Screen Recordings under Snow Leopard?

    - by willc2
    QuickTime Player in Snow Leopard now allows you to record the screen. Awesome! Once you have a movie, it will let you trim screen recordings for length. Is there a way to crop the movie's dimensions, either in QT or using some built-in or free software?
    Update: How do I crop to an arbitrary size and aspect ratio? iMovie only seems to let you crop to the aspect ratio of the containing project.
    Result: Both good answers, but since I have QuickTime Player 7 and Photoshop, that's the workflow I chose as the answer. NOTE: If you have Photoshop Extended, you can import a movie, use the crop tool, and export the cropped movie. Not free or built-in, but convenient. To summarize the instructions from the video link ricbax posted:
      1. Open the movie in QuickTime Player 7
      2. Copy a frame and paste it into a new document in Photoshop
      3. Draw a rectangular selection around the area to keep and fill it with black
      4. Invert the selection and fill with white
      5. Save as .GIF, with 2 colors
      6. Back in QuickTime Player 7, open the Movie Properties window
      7. Select the Video Track
      8. Select the Visual Settings tab
      9. Drag and drop the 2-color .GIF file onto the Mask drop area (or use the Choose File button)
      10. Export the (now cropped) movie
    DONE

    Read the article

  • Concatenating ogg video files from the command line

    - by Noufal Ibrahim
    Okay. I've got a few Ogg files I've created using a desktop recording tool. I've transcoded them using ffmpeg once (mainly to clip out the beginnings and the ends). Now I have 3 such files which I want to concatenate into a single .ogv file. I tried using oggCat; it crashed with some kind of error (I tried concatenating a file to itself using oggCat and that failed too, leading me to believe that my distro is shipping a broken version of the package). Simply cat-ing the files together works, but then I can't seek, which is not cool. I ran mencoder like this:
      mencoder -ovc lavc -oac lavc file1.ogv file2.ogv file3.ogv -o complete.ogv
    but it transcodes the files into an AVI and clips off a little of the 3 videos. So, how do I do this?
    Update 1: My current workaround is to transcode the 3 files into .mpg using ffmpeg, cat them together, and then transcode them back into ogv.
    Update 2: PiTiVi works for this kind of thing, but I need something from the command line that I can automate and script.
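    For what it's worth, newer ffmpeg builds ship a concat demuxer that can join files without re-encoding, which would avoid the lossy round trip through .mpg. A hedged sketch, assuming all three files were produced the same way and therefore share identical codec parameters:

      # list the inputs, one per line, in playback order
      printf "file '%s'\n" file1.ogv file2.ogv file3.ogv > list.txt

      # stream-copy them into a single seekable file
      ffmpeg -f concat -safe 0 -i list.txt -c copy complete.ogv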

    Read the article

  • Diagnosing a BSOD involving USB

    - by David Ebbo
    [Running Win7 Ultimate 64 bit] My new HP Pavilion Elite HPE-450t has been plagued by BSOD crashes since I got it about 5 weeks ago. The crashes are somewhat rare, sometimes not occurring for 3 or 4 days. I have spent a lot of time trying to isolate the device that could be at fault, but I have seen crashes with only the keyboard and mouse plugged in (as USB devices), and I tried two sets of keyboard/mouse, so I'm running out of ideas. :( The WhoCrashed tool gave this info about my latest BSOD:
      crash dump file: C:\Windows\Minidump\121310-11887-01.dmp
      This was probably caused by the following module: usbport.sys (USBPORT+0x2DE4E)
      Bugcheck code: 0xFE (0x5, 0xFFFFFA8008F571A0, 0x80863B34, 0xFFFFFA80092F2510)
      Error: BUGCODE_USB_DRIVER
      file path: C:\Windows\system32\drivers\usbport.sys
      product: Microsoft® Windows® Operating System
      company: Microsoft Corporation
      description: USB 1.1 & 2.0 Port Driver
      Bug check description: This indicates that an error has occurred in a Universal Serial Bus (USB) driver.
      The crash took place in a standard Microsoft module. Your system configuration may be incorrect. Possibly this problem is caused by another driver on your system which cannot be identified at this time.
    I looked at http://msdn.microsoft.com/en-us/library/ff560407(VS.85).aspx, and for Parameter1 = 0x5 it says "A hardware failure has occurred due to a bad physical address found in a hardware data structure. This is not due to a driver bug." Should I conclude that it's a hardware issue in the machine itself, rather than a bad USB driver or USB device? Here is the minidump, in case someone can get more info out of it: http://ewt52q.blu.livefilestore.com/y1peS4Ce8nSK1SXghzMDoxDWXlaEu-EKCJsv25y8y5DXXIUzZ9U0_tYgFJXd939fykwa0zRmx98IW0PYG18GioqKAuARYjtspSA/121310-11887-01.dmp?download&psid=2
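    If anyone wants to dig into the dump themselves, the usual route (a hedged sketch, assuming the Debugging Tools for Windows are installed) is to open the minidump in WinDbg and let it do the analysis:

      windbg -z C:\Windows\Minidump\121310-11887-01.dmp

      (then, at the debugger prompt)
      .symfix
      .reload
      !analyze -v
      lmvm usbport

    .symfix and .reload pull symbols from Microsoft's public symbol server, !analyze -v gives the detailed bugcheck analysis, and lmvm usbport shows the version and timestamp of the module being blamed.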

    Read the article

  • Windows 7 won't boot from any bootloader except for Windows Boot Manager after partition resize

    - by user2468327
    I have a triple-boot system on a single SSD: OS X, Windows 7, and Ubuntu. I use Chimera (basically another version of Chameleon) as my bootloader. Usually I can boot all 3 OSes without any issue, but after using GParted to make my Ubuntu partition 2 GB larger, Windows 7 throws me an error when I try to boot it from either Chimera or GRUB. The error is consistently "0xc000000e: can't find \Boot\BCD" (slightly paraphrased). However, I can still get into Windows by selecting Windows Boot Manager from the boot options in my BIOS. I've already tried several known fixes for similar issues, including bootrec /rebuildbcd (and variations) and BootRec.exe /FixMbr + BootRec.exe /FixBoot. I've also tried chkdsk. At best this has made Windows 7 boot on its own by default (forcing me to reinstall Chimera and change my boot settings back in the BIOS); at worst it has made Windows not boot at all. Now I'm back full circle where I started. A detail that might be useful: bootrec /rebuildbcd says that the number of found Windows installations is 0. I'm fairly certain that I don't have a hybrid MBR, mainly because I have a UEFI BIOS, and with that it appears each OS can work from the GPT directly, so a hybrid MBR would be pointless to have and deal with. I may be wrong, though; I couldn't find any way of confirming it online. However, I know for sure that the version of Windows I have installed is the UEFI version, and every partition tool I've used to look at my boot drive tells me it's GPT. How do I get it back so I can boot Windows 7 through another bootloader, so I don't have to manually select it in the BIOS? Preferably without a reinstall.
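    One hedged thing that sometimes helps when bootrec reports zero installations on a UEFI/GPT system: recreate the boot files and BCD store on the EFI System Partition with bcdboot, from an elevated command prompt inside the working Windows session. The drive letter S: is just a placeholder for the mounted ESP:

      rem mount the EFI System Partition as S:
      mountvol S: /S

      rem regenerate the boot environment and BCD for this Windows installation
      bcdboot C:\Windows /s S: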

    Read the article

  • Installing Joomla on Windows Server 2008 with IIS 7.0

    - by Greg Zwaagstra
    Hi, I have been spending the past while trying to install Joomla on a server running Windows Server 2008. I have successfully installed PHP (using Microsoft's web tool for installing PHP with IIS) and MySQL, and am now trying to run the browser-based installation. Everything comes up green, I fill in the appropriate information regarding the site name, MySQL settings, etc., and no errors are thrown. However, when I get to the step that asks me to remove the installation directory, I am unable to do so, as Windows states it is in use by another program (I cannot fathom how this is true). Also, no configuration.php file is created, so even if I managed to delete the folder I have a feeling there would be problems. I was thinking there was some kind of permissions issue, and have given IIS_IUSRS read, write, and execute permissions on the entire folder that Joomla resides in, but this has not helped. Any help in this matter is greatly appreciated. ;) Greg
    EDIT: I decided to try installing Joomla manually by editing the configuration.php file myself. This has worked great, and now I am certain there is some kind of permissions issue going on, because I am able to do everything that involves the MySQL database (create an article, edit menu items, etc.), but anything that involves making changes to the Joomla installation's directory does not work (installing plugins, editing configuration settings using the Global Configuration menu within Joomla, etc.). I have granted IIS_IUSRS every permission except Full Control (reading on the Joomla! forums suggests that this should be enough for everything to work). This is confusing to me and I am quite stuck on this problem.
    EDIT 2: The bizarre thing is that in the System Info under Directory Permissions, everything shows up as Writable, but then whenever I try to actually use Joomla to, for example, edit the configuration.php file through the interface, it says it is unable to edit the file.
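    A hedged guess at why the database side works while file writes fail: with PHP running under FastCGI on IIS 7, filesystem access typically happens as the application pool identity (NETWORK SERVICE by default on Server 2008), which is not covered by a grant to IIS_IUSRS alone. Granting that identity Modify rights on the Joomla directory may be worth a try; the path below is a placeholder:

      icacls "C:\inetpub\wwwroot\joomla" /grant "NETWORK SERVICE:(OI)(CI)M" /T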

    Read the article

  • Compressing and copying large files on Windows Server?

    - by Aaron
    I've been having a hard time copying large database backups from the database server to a test box at another site. I'm open to any ideas that would help me get this database moved without having to resort to a USB hard drive and the mail. The database server is running Windows Server 2003 R2 Enterprise with 16 GB of RAM and two quad-core 3.0 GHz Xeon X5450s. The files are SQL Server 2005 backup files between 100 GB and 250 GB. The pipe is not the fastest, and SQL Server backup files typically compress down to 10-40% of the original, so it made sense to me to compress the files first. I've tried a number of methods, including:
      - gzip 1.2.4 (UnxUtils) and 1.3.12 (GnuWin)
      - bzip2 1.0.1 (UnxUtils) and 1.0.5 (Cygwin)
      - WinRAR 3.90
      - 7-Zip 4.65 (7za.exe)
    I've attempted to use the WinRAR and 7-Zip options for splitting into multiple segments. 7za.exe has worked well for me for database backups on another server, which has ~50 GB backups. I've also tried splitting the .BAK file first with various utilities and compressing the resulting segments. No joy with that approach either; no matter the tool, it ends up butting up against the size of the file. Especially frustrating is that I've transferred files of similar size on Unix boxes without problems using rsync+ssh. Installing an SSH server is not an option for the situation I'm in, unfortunately. For example, this is how 7-Zip dies:
      H:\dbatmp>7za.exe a -t7z -v250m -mx3 h:\dbatmp\zip\db-20100419_1228.7z h:\dbatmp\db-20100419_1228.bak
      7-Zip (A) 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
      Scanning
      Creating archive h:\dbatmp\zip\db-20100419_1228.7z
      Compressing  db-20100419_1228.bak
      System error: Unspecified error
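    A hedged workaround for the transfer side of the problem, if an SMB share between the sites is possible: robocopy in restartable mode resumes a large file after a dropped connection instead of starting over (robocopy is built into Vista/Server 2008 and available for Server 2003 via the Resource Kit; the paths and share name below are placeholders):

      robocopy H:\dbatmp\zip \\testbox\backups db-20100419_1228.7z /Z /R:20 /W:30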

    Read the article

  • Prevalent, recurring hard drive failure on Intel MacBooks from 2006/2007

    - by SimonSalman
    Hi,
    Long story: my MacBook's hard drive failed one year ago, just a month after its warranty ended - or, put differently, a year and a month after I bought it. After about ten phone calls to Apple's service they agreed to extend the warranty for another year, so I got the drive replaced free of charge. In the meantime I got to know many MacBook users who experience or report hard drive failures. A few observations:
      - every reported crash was preceded by a slowdown in system performance, an increased occurrence of the spinning beach ball wait cursor, and frequent crashes of applications that used to run very robustly
      - it happened (as far as I know) with MacBooks from 2006/2007, and all these MacBooks additionally suffer from a recurring wearing-down of their "top case"
      - many heavy users have had to replace their HDDs three times since 2006/2007, ending in a head crash that made it impossible to recover the data (the diagnosis of recovery specialists) in most, but not all, cases
      - the HDD was a Seagate (which doesn't necessarily have to be the cause, if the majority of that MacBook batch shipped with Seagate drives)
    And right now, one year after my first disk crash, these symptoms are piling up on my system again...
    Short version: prevalent hard drive failure in the MacBook batch from 2006/2007 (i.e. 2.16 GHz Intel Core 2 Duo). I am looking for any (preferably open source) tool for checking the hard drive's condition, especially to detect the known "MacBook problem", so that I can replace the disk in time. If any Mac user has found a way to prevent the repeated failure of their hard drive, I would be very glad to hear about it. I really enjoy my old MacBook, but I hate being interrupted every year by an HDD crash. BTW, the issue has been under discussion for a long time, but so far there seems to be no solution. Thanks, Simon
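    An open source option for the "check the drive's condition" part: smartmontools reads the drive's SMART attributes (reallocated and pending sector counts are the usual early-warning signs) and can trigger the drive's built-in self-tests. A sketch, assuming it is installed via MacPorts or similar and the internal disk is disk0:

      # overall health verdict plus the full attribute table
      smartctl -a /dev/disk0

      # run the short self-test, then check the results a few minutes later
      smartctl -t short /dev/disk0
      smartctl -l selftest /dev/disk0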

    Read the article

  • How can I unregister a service with dns-sd?

    - by Roman
    I am trying to use the "dns-sd" command line tool on my Windows 7 machine. I can already do a few things. For example, I can register a service using "dns-sd -R ...", and I can browse (see) registered services using "dns-sd -B ...". What I still miss is how to unregister a service. At the moment, when I type "dns-sd -R ...", dns-sd does not return me to the command prompt; to get back to the prompt I have to press Ctrl-C, and the service stays registered until I do. What I want is to run "dns-sd -R ..." in the background and then have a way to unregister the service from the command line. One more thing I do not understand yet is what "to look up a service" means. As I see it, it should be sufficient to register a service, see it, and then unregister it. But apparently I also need to look up a service. What does that mean, and why do I need to do it?
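    For the record, dns-sd is deliberately built this way: a registration lives exactly as long as the dns-sd process that announced it, so "unregistering" means terminating that process. A hedged sketch of doing that from the Windows command line (the service name, type and port are placeholders):

      rem register in the background, in its own console window
      start "dnssd-myservice" dns-sd -R "My Service" _http._tcp local 8080

      rem later: unregister by ending the process (this ends every registration made this way)
      taskkill /IM dns-sd.exe /F

    As for "look up": dns-sd -L resolves a browsed service instance to its actual host name and port, which -B alone does not give you. You only need it when you want to connect to a service, not for the register/browse/unregister cycle.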

    Read the article

  • XP SP2 Event log not logging events

    - by Weedfreer
    I have a problem whereby a terminal appears not to be logging events correctly and occasionally has problems communicating across the network. The terminal was previously infected with a virus which appears to have 'played' with the default group policy in the standard user profile. Although outwardly the terminal appears to be working normally, I still have a nagging feeling that it isn't quite back to the way it was. It was infected by a user plugging in a USB stick while the company was using the older version of the AV software - typically, a week or so before it was updated. I have configured the event logs to overwrite as required and to be 5056 KB in maximum size. I have also attempted:
      - disabling the Event Log service and restarting
      - renewing the EVT files in the Windows\system32\config directory
      - restarting the Event Log service and restarting
      - clearing the event log in the Services MMC
      - resetting the filters to default in the Services MMC
      - using the EVENTCREATE command remotely from a CMD window on the server to force an event creation event
    So far the only operation to have any sort of success is the remote EVENTCREATE command from a CMD window on the server. As it stands, the only other time the computer has managed to create events is while it is being restarted. Has anyone got any ideas on how to proceed? I'm thinking possibly a refresh of the Windows\system32\config\SystemProfile folder. I'm also thinking about running a tool such as Malwarebytes, but this could be slightly controversial as the system needs to be running on 'up-time' for as long as possible. I'm also wondering whether anyone knows of any Windows admin tools that would let me control the event logging options or default security options, so that I could get it back to some sort of standard. What I'm trying to avoid is a complete re-imaging of the terminal. Although this is an option, I don't really want to take it if I don't need to. Many thanks in advance for any suggestions anyone may be able to provide.
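    On the "get the security options back to some sort of standard" point, a hedged option: Microsoft documents a secedit command for reapplying the default security template on XP. It is fairly invasive, so it would be worth testing on a non-critical machine first:

      secedit /configure /cfg %windir%\repair\secsetup.inf /db secsetup.sdb /verbose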

    Read the article

  • Network will not work properly after running TCP Optimizer, but safe mode settings work perfectly. How do I restore it?

    - by michele
    I was experiencing some issues with my connection while playing online, and I tried to optimize it by running TCP Optimizer on my PC (Windows 7 64-bit Professional). I thought the situation might improve, but it didn't. Actually, I now get extremely slow page loading times, probably due to a very low RWIN value of 1024. I understand that Windows 7 has a mechanism to automatically adjust the RWIN value when needed. The setting from netsh is "normal", so I guess something else must be wrong. I tried every automatic tool out there to restore Windows' default values, but had no success. I currently have what should be labeled as "default values" for everything TCP Optimizer initially changed, but the problem persists. The thing is, I just found out that running Windows in safe mode SOLVES the problem completely. The problem is that as soon as I reboot, I get the same issue all over again. So my question is: is there a way to use SAFE MODE network settings in NORMAL mode?
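    A hedged set of commands that resets the TCP/IP and Winsock state that tools like TCP Optimizer modify, run from an elevated command prompt and followed by a reboot; worth trying before hunting down individual registry values:

      netsh int tcp show global
      netsh int tcp set global autotuninglevel=normal
      netsh int tcp set heuristics disabled
      netsh int ip reset
      netsh winsock reset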

    Read the article

  • Coming from Win XP to 7 and having new accessibility software problems

    - by Anonymous Jones
    I just switched from Windows XP Pro SP3 (32-bit) to Windows 7 Ultimate (32-bit) on a new PC. Now both the new on-screen keyboard and a utility for sending mouse clicks are being problematic. The problem with 7's OSK is that some things I type only work intermittently or behave oddly: Alt+Tab with multiple Tabs, other Alt/Ctrl/Shift/Win key combinations, and the context menu key. Sometimes apps will not take focus for input at all. I use the OSK in 'hover' mode, set to 0.5 seconds. The clicking tool is Point-N-Click, which sends clicks when I dwell anywhere for 1.25 seconds with the mouse pointer. http://www.polital.com/pnc/ The problem with it is that sometimes it fails to click. Most often this happens in some of the Control Panel sections, on the taskbar, and when UAC pops up. It seems to occur in conjunction with OSK usage a bit too, I think. I'm using an Administrator account. DEP and UAC settings are at their defaults. What can I do to fix or work around either of these problems? I'm disabled, so this really is killing usability.

    Read the article

  • How to pass alias through sudo

    - by Tanktalus
    I have an alias that passes some parameters to a tool that I use often. Sometimes I run it as myself, sometimes under sudo. Unfortunately, of course, sudo doesn't recognise the alias. Does anyone have a hint on how to pass the alias through? In this case, I have a bunch of options for perl when I'm debugging:
      alias pd='perl -Ilib -I/home/myuser/lib -d'
    Sometimes I have to debug my tools as root, so instead of running:
      pd ./mytool --some params
    I need to run it under sudo. I've tried many ways:
      sudo eval $(alias pd)\; pd ./mytool --some params
      sudo $(alias pd)\; pd ./mytool --some params
      sudo bash -c "$(alias pd)\; pd ./mytool --some params"
      sudo bash -c "$(alias pd); pd ./mytool --some params"
      sudo bash -c eval\ "$(alias pd)\; pd ./mytool --some params"
      sudo bash -c eval\ "'$(alias pd)\; pd ./mytool --some params'"
    I was hoping for a nice, concise way to ensure that my current pd alias is fully used (in case I need to tweak it later), though some of my attempts weren't concise at all. My last resort is to put it into a shell script and put that somewhere sudo will be able to find it. But aliases are soooo handy sometimes, so that is a last resort.
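    One commonly suggested trick, offered as a hedged sketch: alias sudo itself with a trailing space. Bash documents that when an alias value ends in a blank, the next word is also checked for alias expansion, so pd gets expanded in your own shell before sudo ever sees it:

      alias sudo='sudo '       # trailing space: expand the following word as an alias too
      alias pd='perl -Ilib -I/home/myuser/lib -d'

      sudo pd ./mytool --some params

    The caveat is that the alias body is expanded with your environment, not root's, which is usually what you want here anyway.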

    Read the article

  • What is the best way to recover from a MySQL replication failure?

    - by Itai Ganot
    Today, the replication between our master MySQL DB server and the two replication servers dropped. I have a procedure here which was written a long time ago, and I'm not sure it's the fastest method to recover from this issue. I'd like to share the procedure with you, and I'd appreciate it if you could give your thoughts about it and maybe even tell me how it can be done more quickly.
    On the master:
      RESET MASTER;
      FLUSH TABLES WITH READ LOCK;
      SHOW MASTER STATUS;
    Copy the values returned by the last command somewhere. Without closing the connection to the client (because that would release the read lock), issue the command to get a dump of the master:
      mysqldump mysq
    Now you can release the lock, even if the dump hasn't finished. To do so, run the following in the mysql client:
      UNLOCK TABLES;
    Now copy the dump file to the slave using scp or your preferred tool.
    On the slave, open a connection to mysql and type:
      STOP SLAVE;
    Load the master's data dump with this console command:
      mysql -uroot -p < mysqldump.sql
    Sync the slave and master logs:
      RESET SLAVE;
      CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;
    where the values of the above fields are the ones you copied before. Finally type:
      START SLAVE;
    To check that everything is working again, type SHOW SLAVE STATUS; you should see:
      Slave_IO_Running: Yes
      Slave_SQL_Running: Yes
    That's it! At the moment I'm at the stage of copying the DB from the master to the other two replication servers, and it has taken more than 6 hours to get to this point; isn't that too slow? The servers are connected through a 1 Gb switch.
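    One hedged speed-up, if the tables are InnoDB: let mysqldump take a consistent snapshot and record the binlog coordinates itself. That avoids holding the global read lock for the whole dump and removes the manual copying of the SHOW MASTER STATUS values:

      # on the master: consistent dump with the CHANGE MASTER TO coordinates embedded
      mysqldump -uroot -p --all-databases --single-transaction --master-data=1 --routines > master_dump.sql

      # on each slave: STOP SLAVE; load the dump; START SLAVE;
      mysql -uroot -p < master_dump.sql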

    Read the article

  • Files have no ownership permissions and I can't assign ownership

    - by Force Flow
    I'm having problems with file permissions on a Server 2008 R1 server. Office 2010 tmp files are being created without any security permissions assigned; they aren't being deleted, I can't assign ownership, and I can't delete them. I downloaded and ran the Sysinternals tool handle.exe. When I ran it for the first time, handle64.exe was created, also without any permissions assigned; I cannot take ownership of it and cannot delete it. Seemingly random files in random places don't seem to have any permissions assigned. Access is denied when attempting to change ownership to Administrator or the Administrators group. If I try to replace the inheritable permissions of the folder these files are in, access is denied for the files with no permissions. I attempted to use subinacl to view the ownership information on the files that had no permissions, but access was denied there as well. I also tried setting the owner with SetACL in an elevated cmd window, but access was denied as well. This problem only surfaced in the last few days, and I'm unsure what the cause is or how to correct it.
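    For reference, the usual brute-force sequence for seizing such files, from an elevated command prompt (a hedged sketch; the path is a placeholder, and if even this fails the file is probably still held open by a process, which handle.exe or Process Explorer can identify):

      rem give ownership to the Administrators group
      takeown /F "C:\path\to\stuck-file.tmp" /A

      rem then grant Administrators full control and retry the delete
      icacls "C:\path\to\stuck-file.tmp" /grant Administrators:F
      del "C:\path\to\stuck-file.tmp"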

    Read the article

  • Constant crashes in Windows 7 64-bit when playing games

    - by yx
    I've tried everything I can possibly think of to fix this problem and I'm totally out of ideas, so any help would be appreciated. The problem: whenever I fire up a game, it works for a short while with no problems, and then it crashes. Either it's a hard crash, forcing me to reboot, or Windows reports that the display driver has stopped working and recovered. Here is a list of things I've already tried:
      - Drivers: tried the latest drivers (Catalyst 9.12) as well as the stock drivers that came with the video card. Also have the latest BIOS/chipset.
      - Memtest: ran Memtest86+ overnight with no problems; the Windows memory diagnostic tool also finds no problems.
      - Overheating: video card/CPU temperatures are well below peak (42 and 31 degrees Celsius respectively).
      - PSU voltage: CPUID shows that the voltage levels are all above what they should be. The PSU itself is only roughly 16 months old and is a good model.
      - HDD: no errors when checked.
      - GPU: brand new (I replaced the previous card since I thought it was the problem; apparently not).
      - Overclocking: everything is at stock levels, memory voltage is set to the manufacturer's standard.
    Specs:
      Motherboard: ASUS P5Q Pro
      CPU: Core 2 Duo E8400 3.0 GHz
      OS: Windows 7 Home Premium 64-bit
      Memory: Mushkin Enhanced 4GB DDR2
      GPU: Sapphire HD 5850 1GB
      PSU: SeaSonic M12 600W ATX12V
      DirectX: DX11
    Event Viewer after a crash always has these logged:
      A fatal hardware error has occurred.
      Reported by component: Processor Core
      Error Source: Machine Check Exception
      Error Type: Bus/Interconnect Error
      Processor ID: 1
      The details view of this entry contains further information.
      A fatal hardware error has occurred.
      Reported by component: Processor Core
      Error Source: Machine Check Exception
      Error Type: Bus/Interconnect Error
      Processor ID: 0
      The details view of this entry contains further information.
    A previous card that I had (4850X2) also produced these errors, so I changed video cards, but the same thing keeps happening.

    Read the article

  • WinHttpCertCfg not importing certificate

    - by Ramon Zarazua
    I need to set up a deployment script that imports an SSL certificate that my service uses. I have tried importing with WinHttpCertCfg and with CertMgr, to no avail. Here are the command lines I have tried with each:
      winhttpcertcfg.exe -i <certname>.pfx -c LOCAL_MACHINE\My -p <password> -a <user service runs as>
    and
      CertMgr.exe -add -all -s -r localMachine -c <cert name> My
    From what I have investigated, it seems that CertMgr does not allow you to import certificates with a password, so I'd rather get winhttpcertcfg working. When I run them I get the following output:
      WinHttpCertCfg:
        Microsoft (R) WinHTTP Certificate Configuration Tool
        Copyright (C) Microsoft Corporation 2001.
      CertMgr:
        CertMgr Succeeded
    However, when I look at the local machine certificates in MMC, try to load the certificate from my service, list it with winhttpcertcfg, or even look in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\MY\Certificates, it is not found. I have checked all of the following:
      - installing the cert manually (through the CertMgr.msc dialogs) works
      - the user installing is running as administrator
      - the user installing has full access on the certificate
      - the tools do print an error when something is genuinely wrong (e.g. a wrong password)
      - I tried it on multiple machines (all of them Server 2008 R2)
    At this point I am officially out of ideas. Thank you.
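    A hedged alternative that scripts cleanly on Server 2008 R2: import the PFX with certutil (built in), then grant the service account access to the private key with winhttpcertcfg's -g option. The angle-bracket names are the same placeholders used above:

      rem import into the Local Machine "My" store (run elevated)
      certutil -f -p <password> -importpfx <certname>.pfx

      rem grant the service account read access to the private key
      winhttpcertcfg -g -c LOCAL_MACHINE\My -s <cert subject name> -a <user service runs as>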

    Read the article

  • Kickstart: Serve dynamic kickstart images via a CGI or PHP script?

    - by Stefan Lasiewski
    I'd like to kickstart a couple dozen RHEL6/SL6 servers. However, some of these servers are different, and I don't want to create a new ks.cfg file for each class of server. Are there any products which can generate a Kickstart file dynamically, on the fly, from a template? For example, if I append a line like this to the KERNEL:
      APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi
    then the script ks.cgi can determine what host this is (via the MAC address) and print out Kickstart options which are appropriate for that host. I could optionally override some options by passing parameters to the script, like this:
      APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi?NODETYPE=production&IP=192.168.2.80
    After we kickstart the server, we activate Cfengine/Puppet on the system and manage it with our favorite configuration management product. We're experimenting with xCAT, but it is proving too cumbersome. I've looked into Cobbler, but I'm not sure it does this.
    Update: A roll-your-own solution is discussed in the O'Reilly book Managing RPM-Based Systems with Kickstart and Yum, Chapter 3 ("Customizing Your Kickstart Install", "Dynamic ks.cfg"), which echoes some of the comments in this thread:
      "To implement such a tool is beyond the scope of this Short Cut, but I can walk through the high-level design. Any such solution would mix a data store (the things that change) with a templating solution (the things that don't change). The data store would hold the per-machine data, such as the IP address and hostname. You would also need a unique identifier, perhaps the hostname, such that you could pick up a given machine's data. The data store could be a flat file, XML data, or a relational database such as PostgreSQL or MySQL. In turn, to invoke the system, you pass a machine's unique identifier as a URL parameter. For example:
        boot: linux ks=http://your.kickstart.server/gen_config?host-server25
      In this example, the CGI (or servlet, or whatever) generates a ks.cfg for the machine server25."
    But where, oh where, is the code for ks.cgi?
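    Since the post ends by asking for it, here is a rough sketch of what such a ks.cgi could look like in Python. Everything in it - the URL, the profile table, the kickstart directives - is a placeholder standing in for your own data store and template:

      #!/usr/bin/env python
      # ks.cgi - emit a kickstart file based on ?NODETYPE=...&IP=... query parameters,
      # falling back to the requesting address when no IP is passed
      import cgi
      import os

      form = cgi.FieldStorage()
      nodetype = form.getfirst("NODETYPE", "default")
      ip = form.getfirst("IP", os.environ.get("REMOTE_ADDR", ""))

      # stand-in for the real data store (flat file, XML, PostgreSQL/MySQL, ...)
      profiles = {
          "default":    "@core",
          "production": "@core\nhttpd\npuppet",
      }
      packages = profiles.get(nodetype, profiles["default"])

      template = """install
      url --url=http://192.168.1.100/sl6/x86_64/
      lang en_US.UTF-8
      keyboard us
      network --bootproto=static --ip=%(ip)s --netmask=255.255.255.0
      timezone UTC
      %%packages
      %(packages)s
      %%end
      """

      print("Content-Type: text/plain\n")
      print(template % {"ip": ip, "packages": packages})

    Dropped into cgi-bin on an Apache host with CGI enabled and made executable, it would be invoked exactly as in the APPEND lines above; the real work is replacing the profiles dict with whatever per-host data store you settle on.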

    Read the article

  • Generating new SID for Windows 7 cloned partition in Linux?

    - by Jack
    So I've read that the proper way to clone a Windows 7 partition is to run sysprep after the clone is complete. For MANY reasons, this is not possible the way we are cloning these drives (long story short, the drive should be fully up and running after we clone it, with all the settings already there and requiring no user intervention; and no, not even an answer file would work, because the way we customize all the Win7 settings is complex and we do not want the user touching them). I understand Microsoft will not support Windows 7 clones that have not been sysprepped, and that is fine for us. Acronis recovery tools get around this by ticking an option called "Create new NT signature", which resets the SID and GUID on any restore. Symantec has a tool called Ghostwalker which does the same thing. However, we are looking for a way to do this in Linux, because we want to use open source tools to do the imaging (fsarchiver, partclone, etc.; basically the same tools Clonezilla uses internally to clone NTFS partitions). The question is: if we clone using these tools in Linux, how would we generate a new SID afterwards (without using sysprep)? Is there any way to do it from within a Linux environment? The whole imaging process is automated, so if there is a simple command I can just throw into my shell script, that would be even better. Of course, it would be nice to know if this is even possible. Any ideas?
    EDIT: Forgot to mention that the target machines we are restoring the image onto are EXACTLY the same.

    Read the article

  • Any non-custom way to manage iptables with fail2ban and libvirt+kvm?

    - by Peter Hansen
    I have an Ubuntu 9.04 server running libvirt/kvm and fail2ban (for SSH attacks). Both libvirt and fail2ban integrate with iptables in different ways. Libvirt uses (I think) some XML config and during startup (?) configures forwarding to the VM subnet. Fail2ban installs a custom chain (probably at init) and periodically modifies it to ban/unban probable attackers. I also need to install my own rules to forward various ports to servers running in VMs and on other machines, and set up rudimentary security (e.g. drop all INPUT traffic except the few ports I want open), and of course I'd like the ability to add/remove rules safely without restarting. It seems to me iptables is a powerful tool that's sorely lacking some sort of standardized way of juggling all this stuff. Every project, and every sysadmin, seems to do it differently! (And I think there's lots of "cargo cult" admin going on here, with people cloning crude approaches like "use iptables-save like so".) Short of figuring out the gory details of exactly how both of these (and potentially other) tools manipulate the netfilter tables, and developing my own scripts or just manually executing iptables commands, is there any way to safely work with iptables while not breaking the functionality of these other tools? Any nascent standards or projects defined to bring sanity to this area? Even a helpful web page I missed that might cover at least these two packages together?
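    Not a standard, but the pattern that tends to coexist peacefully with both tools, offered as a hedged sketch rather than the one true way: keep all of your own rules in chains you create and flush yourself, hook them into INPUT/FORWARD once, and never touch the chains fail2ban and libvirt manage. Appending (rather than inserting) the jump keeps fail2ban's own inserted chains running first. Something like this from an init script:

      # create our chain if needed, then empty it (safe to re-run)
      iptables -N MY-INPUT 2>/dev/null
      iptables -F MY-INPUT

      # hook it in at the end of INPUT, after fail2ban's inserted jumps
      iptables -D INPUT -j MY-INPUT 2>/dev/null
      iptables -A INPUT -j MY-INPUT

      # our policy lives only in our own chain
      iptables -A MY-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A MY-INPUT -p tcp --dport 22 -j ACCEPT
      iptables -A MY-INPUT -j DROP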

    Read the article
