Search Results

Search found 18729 results on 750 pages for 'edit'.

Page 512 of 750

  • Iptables - Redirect outbound traffic on a port to inbound traffic on 127.0.0.1

    - by GoldenNewby
    I will be awarding a +100 bounty to the correct answer once it becomes available in 48 hours.

    Is there a way to redirect traffic that is set to go out of the server to another IP back to the server on localhost (preferably as if it were coming from the original destination)? I'd basically like to be able to set up my own software that listens on, say, port 80 and receives traffic that was sent to, say, 1.2.3.4. As an example with some code, here would be the server:

        my $server = IO::Socket::INET->new(
            LocalAddr => '127.0.0.1',
            LocalPort => '80',
            Listen    => 128,
        );

    And that would receive traffic from the following client:

        my $client = IO::Socket::INET->new(
            PeerAddr => 'google.com',
            PeerPort => '80',
        );

    So rather than the client connecting to google.com, it would be connecting to the server I have listening on localhost. My intention is to use this to catch malware connecting to remote hosts. I don't specifically need the traffic to be redirected to 127.0.0.1, but it needs to be redirected to an IP the same machine can listen on.

    Edit: I've tried the following, and it doesn't work:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 127.0.0.1:80
        iptables -t nat -A POSTROUTING -j MASQUERADE
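    A hedged sketch of one commonly suggested direction (not from the original post; 1.2.3.4 is just the example address above): the nat PREROUTING chain never sees locally generated packets, which traverse the nat OUTPUT chain instead, and the REDIRECT target maps locally generated traffic to 127.0.0.1.

        # Sketch only: OUTPUT (not PREROUTING) handles locally generated packets.
        # REDIRECT rewrites the destination to the local machine (127.0.0.1 for
        # locally generated traffic), where the listener on port 80 picks it up.
        iptables -t nat -A OUTPUT -p tcp -d 1.2.3.4 --dport 80 -j REDIRECT --to-ports 80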

    Read the article

  • Data recovery on a data HDD (no OS)

    - by aCuria
    I am helping a family member with a dead hard disk. It is a Seagate 200 GB 3.5" HDD in one of those old-school external enclosures. The problem was that Windows failed to detect the hard disk when plugged in through USB. I removed the hard disk from its enclosure and plugged it into my desktop PC. The BIOS does detect it upon POST, but unfortunately Windows 7 refuses to boot: it gets stuck on the loading screen with the glowing Windows logo, and Safe Mode doesn't help either. What options do I have before going for professional data recovery?

    Edit: Someone modified the title to something completely different from what I was asking; I just changed it back.

    1) There are two drives: Disk A (dead) and Disk B (my OS disk).
    2) When B is connected to my system, everything works fine.
    3) When A and B are both connected, the machine POSTs fine but Windows won't load.
    4) A has no OS on it, just data. It came from an external HDD enclosure which doesn't belong to me, and I'm trying to do data recovery.
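    A commonly suggested first step (an addition, not from the post, and the device/output paths are hypothetical) is to image the failing disk from a Linux live environment with GNU ddrescue before doing anything else, then work on the image:

        # Copy whatever is readable first, then retry the bad areas a few times.
        # /dev/sdX is the dead disk; the image and map file go on a healthy drive.
        ddrescue -d /dev/sdX /mnt/recovery/diskA.img /mnt/recovery/diskA.map
        ddrescue -d -r3 /dev/sdX /mnt/recovery/diskA.img /mnt/recovery/diskA.map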

    Read the article

  • How to transfer files via infrared on Linux?

    - by arielnmz
    I know this is a way too old technology, but I've got some files inside a very old cellphone that I need to transfer to a very old computer. So far my infrared USB device works well; it's detected by the machine (lsusb output):

        Bus 002 Device 002: ID 0df7:0620 Mobile Action Technology, Inc. MA-620 Infrared Adapter

    I've tried to send the files over MMS and even email (the phone lacks Bluetooth, not to mention USB), but the cellphone's firmware doesn't let me attach them. The file was originally transferred via IrDA, and the phone only has internal memory (a whole 2 million bytes! whoa!).

    I found a package called irda-utils, but it seems that there are only two executables: irdaping and irdadump. I think the dump utility might do the job (as far as I can see it's something like a version of tcpdump for IrDA), but I don't even know how to process the received frames. Could this question be what I'm looking for?

    EDIT: While reading through the Linux Infrared HOWTO I found out about the OpenObex project, which may be what I'm looking for...

    Read the article

  • Tell VLC where to look for plugins.dat file

    - by puk
    I am trying to build VLC from source (I will include the installation script below), but when I try to run vlc I get the following error:

        main libvlc warning: cannot read /home/user/downloads/vlc3/vlc/src/.libs/vlc/plugins/plugins.dat (No such file or directory)

    Why is it even looking in that nonexistent directory? The plugins.dat file is in /usr/lib/vlc/plugins/. I tried

        export VLC_PLUGIN_PATH=/usr/lib/vlc/plugins/

    but it still looks in the nonexistent path. I can create a symbolic link, but that is a terrible way to do it: if in six months I delete my downloads folder, my VLC will suddenly break. Here is the script I am running to install:

        ./configure --enable-rpi-omxil --enable-dvbpsi --enable-x264 --enable-xcb --with-x --enable-xvideo --enable-sdl --enable-avcodec --enable-avformat --enable-swscale --enable-mad --enable-a52 --enable-libmpeg2 --enable-dvdnav --enable-faad --enable-vorbis --enable-ogg --enable-theora --enable-mkv --enable-freetype --enable-fribidi --enable-speex --enable-flac --enable-live555 --enable-caca --enable-skins2 --enable-alsa --enable-ncurses --enable-debug --enable-lirc --enable-live555 --enable-shout --enable-taglib --enable-vcdx --enable-realrtsp --enable-svg --enable-dvdread --enable-dc1394 --enable-twolame --enable-dirac --enable-aa --enable-jack --enable-bluray --enable-opencv --enable-sftp --enable-pulse --enable-projectm --enable-vsxu --enable-atmo --enable-glspectrum '--with-extra-libs=/usr/local/lib' '--with-extra-includes=/usr/local/include' '--x-libraries=/usr/local/lib' '--x-includes=/usr/local/include' '--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/'

    EDIT: I am using the following version:

        VLC media player 2.2.0-git Weatherwax (revision 2.1.0-git-1168-g5804dd1)

    and the --plugin-path option is no longer supported.

    Read the article

  • running automated fsck on remote server

    - by GriffinHeart
    I had another question about df, and now I've come to the conclusion that I need to run fsck on my partition. I've been reading about it and would like some advice, if possible. The situation is this: I have no physical access to the server, and I want to run fsck. From what I read, I just need to touch /forcefsck and fsck will run on the next reboot. My question is, at its most basic: with what arguments will fsck run? Will it need user input to correct errors? And after running, will it save a log of what happened? It would be perfect if it ran like this; is there any way of enforcing that on reboot?

        fsck -v -p /machine/disk/p1 2>&1 > fscklog.txt

    Also, here they describe this: on Debian and Debian derivatives like Ubuntu, it's a good idea to edit /etc/default/rcS on remote servers and set FSCKFIX=yes, which adds -y to the boot-time fsck so the remote server doesn't risk getting stuck waiting for someone to log in at the console and run fsck manually. But on CentOS that file doesn't seem to exist. I only have SSH access at the moment, which is why I'm being so picky about this. Here's some info about the disks and mounted volumes on the server: http://pastebin.centos.org/33314. Thanks.
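    For what it's worth (an addition, hedged): on CentOS 5/6 the boot-time fsck options are read by rc.sysinit from /etc/sysconfig/autofsck rather than /etc/default/rcS, so a rough equivalent of FSCKFIX=yes would look something like this untested sketch:

        # Untested sketch for CentOS 5/6; AUTOFSCK_OPT is passed to the boot-time fsck
        echo 'AUTOFSCK_OPT="-y"' >> /etc/sysconfig/autofsck
        touch /forcefsck
        reboot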

    Read the article

  • Rsync: General file/folder synchronization

    - by Rey Leonard Amorato
    I have a file server which is in charge of pulling a folder tree from multiple workstations on a daily basis. My current method for this is rsync, which works pretty well provided directory names and/or files remain the same. However, when files are renamed or moved about within subdir1, rsync copies them over to the server again, creating duplicates. I then have to manually find and delete the extraneous files/folders that were left on the server by previous syncs. Note that I cannot use rsync's --delete flag, because a sync from any one workstation would then mirror that particular folder tree instead of merging it into the server. Visual diagram:

        Server:         Workstation1:   Workstation2:   Workstation(n):
        Folder*         Folder*         Folder*         Folder*
        -subdir1        -subdir1        -subdir1        -subdir(n)
        -file1          -file1          -file2          -file(n)
        -file2          -file(n)

    Is there a simple script (preferably in bash, nothing fancy) that can delete the extraneous files/folders when a file is renamed or moved to a different subdir? Is there a different program, much like rsync, that can accomplish this task autonomously and in a much simpler manner? I have looked at unison, but I did not like the fact that it keeps a local database for the syncing info. Any tips at all on how I am supposed to tackle this? Thank you in advance for your help.

    EDIT: I have tried unison just recently and I can safely say it is out of the question now. unison is a bi-directional synchronization tool, and from my testing it mirrors the files existing on the server to all workstations. That is unwanted. Preferably, I would want files/folders to stay within their respective workstations and just merge into the server. In other words, uni-directional sync, but with renames/moves propagated to the server. I might have to look into Git/Mercurial/Bazaar as mentioned by Kyle, but I am still unsure if they are fit for the job.
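    One hedged sketch of a direction (an addition, not from the post; hostnames and paths are hypothetical): give each workstation its own staging directory on the server, where --delete is safe because it only mirrors that one workstation, and then build the merged view from the staging copies:

        #!/bin/bash
        # Sketch: mirror each workstation into its own staging dir (safe to use
        # --delete there, so renames/moves don't leave stale copies), then merge
        # the staging dirs into the shared tree.
        WORKSTATIONS="workstation1 workstation2"
        for ws in $WORKSTATIONS; do
            rsync -a --delete "user@${ws}:/path/to/Folder/" "/srv/staging/${ws}/"
        done
        # The merged view can still accumulate leftovers over time; rebuilding it
        # periodically, or serving the per-workstation dirs directly, avoids that.
        rsync -a /srv/staging/*/ /srv/merged/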

    Read the article

  • Difference between "traceroute" and "traceroute -U"

    - by AndiDog
    The manpage of traceroute says that the "-U" parameter (UDP probing) is the default, but I'm getting different results every time. With "-U":

        traceroute -U www.univ-paris1.fr
        traceroute to www.univ-paris1.fr (193.55.96.121), 30 hops max, 60 byte packets
        [...]
        13  rap-vl165-te3-2-jussieu-rtr-021.noc.renater.fr (193.51.181.101)  59.445 ms  56.924 ms  56.651 ms
        [...]
        18  * paris1web.univ-paris1.fr (193.55.96.121)  23.797 ms  23.603 ms

    but plain traceroute gives me a different result and never reaches the final node; it either ends with "!X" or just exits after the maximum of 30 hops:

        traceroute www.univ-paris1.fr
        traceroute to www.univ-paris1.fr (193.55.96.121), 30 hops max, 60 byte packets
        [...]
        11  te1-1-paris1-rtr-021.noc.renater.fr (193.51.189.38)  28.147 ms  28.250 ms  28.538 ms
        [... non-responding nodes ...]
        28  site-1.03-jussieu.rap.prd.fr (195.221.126.58)  85.941 ms !X * *

    Note: I tried this very often and always get the same results. The path within my local network is always the same. So what does the "-U" parameter actually change here? I'm especially interested in what the reason for "!X" could be (communication administratively prohibited).

    EDIT: If it helps, paris-traceroute gives me the following for the last hop:

        14  P(1, 6) site-1.03-jussieu.rap.prd.fr (195.221.126.58)  34.938 ms !5 !T2

    which means that node discards the packet with TTL=2 and returns an unknown message (not "destination unreachable" or the like).
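    As a diagnostic aside (an addition, hedged): when UDP probes appear to be filtered along the path, it can be informative to compare against ICMP and TCP probes, which modern Linux traceroute also supports:

        # ICMP echo probes (often allowed where high UDP ports are filtered)
        traceroute -I www.univ-paris1.fr
        # TCP SYN probes to port 80, which a web server should answer
        traceroute -T -p 80 www.univ-paris1.fr
        # (both modes usually need root privileges)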

    Read the article

  • How to create custom content for the nginx error 502 page, keeping the original URL in the browser

    - by user123862
    I'm trying to get a custom-language error page for nginx while keeping the original URL in the browser, with no success so far. For example, I go to the URL xaluan.com/aaa/bbb.html while the backend server is down; nginx shows error 502, and I want the same URL to stay in the address bar but with my custom message in my language.

    Test 1: I created a custom page at /usr/local/nginx/html/502.html with the following config, but when the error occurs the site shows the default nginx error page at domain.com/502.html (the content of the page is not the one I created):

        error_page 502 /502.html;
        location = /502.html {
            root /usr/local/nginx/html;
        }

    Test 2: Then I created the same page in my www domain folder, /home/xaluano/public_html/502.html, but this keeps redirecting me to domain.com/502.html. The content is now the one I created, but the URL is still not what I need:

        error_page 502 /502.html;
        location = /502.html {
            root /home/xaluano/public_html;
            internal;
        }

    EDIT UPDATE for more detail 10/06/2012: please see my nginx config (http://pastebin.com/7iLD6WQq) and the vhost config (http://pastebin.com/ZZ91KiY6).

    The test case: if the Apache httpd service stops (service httpd stop) and I open a browser and go to xaluan.com/modules.php?name=News&file=article&sid=123456, I should see the 502 error with the same URL in the browser address bar.

    Custom error page: I need a config which, when Apache fails, shows a custom message telling the user to wait one minute for the service to come back, then refreshes the current page with the same URL (the refresh I can do easily with JavaScript). As long as nginx doesn't change the URL, the JavaScript can work it out. Any help would be great; thanks in advance.
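    A hedged sketch (an addition, not from the post; the backend address is hypothetical): for a proxied backend, the usual recipe is to intercept backend errors in the proxying location and serve the error page through an internal-only location, which keeps the original URL because error_page performs an internal redirect:

        location / {
            proxy_pass http://127.0.0.1:8080;   # hypothetical Apache backend
            proxy_intercept_errors on;          # let error_page handle backend 502s
        }
        error_page 502 /502.html;
        location = /502.html {
            root /usr/local/nginx/html;
            internal;                           # served in place, URL unchanged
        }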

    Read the article

  • Windows Server 2008 R2 install reboots unexpectedly during "Completing installation" phase

    - by knda
    I am attempting to install Windows Server 2008 R2 onto a Cisco UCS C201 M2 rack-mounted server but am having major difficulties, and I'm wondering if anyone has some insight or items they could recommend I look at to get this resolved. Installation is being attempted via the Cisco remote console (using CIMC's virtual DVD-ROM). Following the first phase of Setup, where the installation files are copied to the target hard drive, a reboot occurs to load Setup from the HDD; midway through the "Completing installation" phase the system then reboots unexpectedly.

    System configuration:
    - Cisco UCS C201 M2 (2RU rack-mounted server)
    - 16 GB RAM, 2x 73 GB 15K SAS, 4x 300 GB 10K SAS
    - Add-on cards: Intel quad-port GigE card (no Fibre Channel cards)
    - Storage: LSI MegaRAID SAS 9261-8i; onboard SATA is disabled (no SATA drives connected)
    - KVM: Belkin
    - No physical DVD-ROM :(

    I have:
    - Run memtest86+; no RAM faults
    - Disabled/enabled SATA support in the BIOS
    - Attempted the install from a USB DVD-ROM, with no effect
    - Attempted an unattended install scripted via the provided Cisco Configuration Manager DVD
    - Removed the Belkin KVM in case that was causing drama
    - Discovered that the Cisco website is "awesome" for searching for PDFs/drivers (cough), and reverted to Google
    - Downloaded the latest LSI drivers from LSI's site and used them during the Server 2008 install
    - Checked the Windows ISO against checksums from the MS site
    - Checked the Windows ISO by using it for an install in a VM

    I'm running out of ways to troubleshoot this, as I am not sure how to enable any sort of verbose mode during the setup process. The next step I have planned is to remove the Intel NIC and try the installation again.

    Edit: The problem was the "Cisco INTEL QUAD PT GBE" (1000/PT) card. I'll have to see if the card is faulty or if it's just drivers. Thanks for the help.

    Read the article

  • Updated XAMPP with MySQL, all my tables are missing

    - by user371699
    I just updated XAMPP to a newer version, which included updating MySQL from 5.5 to 5.6. In phpMyAdmin, however, all of my tables still appear in the left navigation panel, but the main window shows that all my databases are empty (except for information_schema and a couple of other default tables). Clicking on a table in the navigation panel gives me a "table doesn't exist" message. It also looks like information_schema.tables doesn't have my tables.

    Can anyone assist me with this? I did make a complete backup of all my databases before the upgrade, but I first want to see if I can fix this the "normal" way. Furthermore, I'm not sure if the MySQL upgrade involved changes to the information/performance databases, so I don't know if I can restore the old ones. Thank you.

    EDIT: Continuing my searching, I realized that only the InnoDB tables are missing. I've tried running the following, to no avail:

        /opt/lampp/bin $ sudo ./mysql_install_db --basedir=/opt/lampp

    and

        /opt/lampp/bin $ sudo ./mysql_install_db --basedir=/opt/lampp --datadir=/opt/lampp/var/mysql

    The my.cnf file in /opt/lampp/etc contains the following InnoDB settings:

        innodb_data_home_dir = /opt/lampp/var/mysql/
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_log_group_home_dir = /opt/lampp/var/mysql/
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        innodb_buffer_pool_size = 16M
        # Deprecated in 5.6
        #innodb_additional_mem_pool_size = 2M
        # Set .._log_file_size to 25 % of buffer pool size
        innodb_log_file_size = 5M
        innodb_log_buffer_size = 8M
        innodb_flush_log_at_trx_commit = 1
        innodb_lock_wait_timeout = 50

    What could possibly be wrong? Why is information_schema not updating correctly? It looks like /opt/lampp/var/mysql has all my tables in it within the database directories, but they're still not showing up in information_schema.
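    A hedged guess at a direction (an addition, not from the post): when the per-database .ibd files are present but the tables appear "missing", it is often because the upgrade created a fresh InnoDB system tablespace (ibdata1), which holds the data dictionary. An untested sketch, assuming the pre-upgrade datadir was backed up (backup path is hypothetical):

        # Sketch only: restore the old system tablespace and redo logs next to the
        # per-database directories, then restart MySQL and let 5.6 upgrade them.
        sudo /opt/lampp/lampp stopmysql
        sudo cp /path/to/backup/ibdata1 /opt/lampp/var/mysql/
        sudo cp /path/to/backup/ib_logfile0 /path/to/backup/ib_logfile1 /opt/lampp/var/mysql/
        sudo chown -R mysql:mysql /opt/lampp/var/mysql
        sudo /opt/lampp/lampp startmysql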

    Read the article

  • Raid-5 Performance per spindle scaling

    - by Bill N.
    So I am stuck in a corner. I have a storage project that is limited to 24 spindles and requires heavy random write (the corresponding read side is purely sequential). It needs every bit of space on my drives, ~13 TB total in an n-1 RAID-5, and it has to go fast, over 2 GB/s sort of fast. The obvious answer is to use a stripe/concat (RAID-0/1), or better yet a RAID-10 in place of the RAID-5, but that is disallowed for reasons beyond my control. So I am here asking for help in getting a suboptimal configuration to be as good as it can be.

    The array is built on direct-attached SAS-2 10K rpm drives, backed by an ARECA 18xx series controller with 4 GB of cache, using 64K array stripes and a 4K-stripe-aligned XFS file system with 24 allocation groups (to avoid some of the penalty of being RAID-5).

    The heart of my question is this: in the same setup with 6 spindles/AGs I see near disk-limited write performance, ~100 MB/s per spindle; at 12 spindles I see that drop to ~80 MB/s, and at 24 to ~60 MB/s. I would expect that with distributed parity and matched AGs the performance would scale with the number of spindles, or be worse at small spindle counts, but this array is doing the opposite. What am I missing? Should RAID-5 performance scale with the number of spindles? Many thanks for your answers and any ideas, input, or guidance. --Bill

    Edit: "Improving RAID performance" is the other relevant thread I was able to find; it discusses some of the same issues in the answers, though it still leaves me without an answer on the performance scaling.

    Read the article

  • What breaks in a Windows domain if a member has a high time skew?

    - by Ryan Ries
    It's taken for granted by most IT people that in a Windows domain, if a member server's clock is off by more than 5 minutes (or however many minutes you've configured) from that of its domain controller, logons and authentications will fail. But that is not necessarily true, at least not for all authentication processes on all versions of Windows.

    For instance, I can set the time on my Windows 7 client to be skewed all to heck and logoff/logon still works fine. What happens is that my client sends an AS_REQ (with its time stamp) to the domain controller, and the DC responds with KRB_AP_ERR_SKEW. But the magic is that when the DC responds with the aforementioned Kerberos error, the DC also includes its own time stamp, which the client in turn uses to adjust its time before resubmitting the AS_REQ, which is then approved. This behavior is not considered a security threat because encryption and secrets are still being used in the communication. This is also not just a Microsoft thing; RFC 4120 describes this behavior.

    So my question is: does anyone know when this changed? And why is it that other things fail? For instance, Office Communicator kicks me off if my clock starts drifting too far out. I would really like more detail on this.

    Edit: Here's the bit from RFC 4120 that I'm talking about:

        If the server clock and the client clock are off by more than the
        policy-determined clock skew limit (usually 5 minutes), the server
        MUST return a KRB_AP_ERR_SKEW. The optional client's time in the
        KRB-ERROR SHOULD be filled out. If the server protects the error by
        adding the Cksum field and returning the correct client's time, the
        client SHOULD compute the difference (in seconds) between the two
        clocks based upon the client and server time contained in the
        KRB-ERROR message. The client SHOULD store this clock difference and
        use it to adjust its clock in subsequent messages. If the error is
        not protected, the client MUST NOT use the difference to adjust
        subsequent messages, because doing so would allow an attacker to
        construct authenticators that can be used to mount replay attacks.

    Read the article

  • Why can't I connect to computers on my network using our external IP address?

    - by Kivin
    My home network is serviced by an ADSL line. The modem is in bridged mode and the router performs the PPPoE. Three computers are connected to the router: two wired Windows 7 boxes and an Ubuntu Linux box over wifi. The computers host various services, including FTP and HTTP, and the router has port forwarding mapped from the relevant ports to the reserved IP addresses of the computers.

    If I attempt to connect to a server inside the network using our external address, such as ftp://67.xx.xxx.xxx, from inside the network, the request times out. However, if I connect using the internally mapped address, such as ftp://192.168.0.100, all is well. This is a nuisance for setting up software, especially on the laptop, which needs to be able to phone home from anywhere, and I just don't have enough expertise with networking to know why this is occurring, let alone whether it can be solved.

    Edit: It should be noted that the servers are accessible from outside the network, say at the Starbucks across the street, perfectly fine, using the ISP-provided address and the appropriate port.

    Read the article

  • Configure an Azure VM for Dynamic DNS for Cloud Services

    - by Adam
    I am trying to set up an Azure VM with proper DNS to allow multiple cloud services to communicate across cloud service boundaries. As I understand it, I need to provide my own DNS server. I do not have any on-premise infrastructure, so I am trying to configure an Azure VM to act as my DNS. This SO question (http://stackoverflow.com/questions/21858926/azure-how-to-connect-one-cloud-service-with-other-in-one-virtual-network) is very similar to my setup, and this article (http://msdn.microsoft.com/en-us/library/windowsazure/jj156088.aspx) describes my particular case: name resolution between virtual machines and role instances located in the same virtual network but different cloud services.

    Here is what I have done:
    - Created an Azure virtual network and declared subnets for each of my cloud services.
    - Created an Azure VM (Windows 2012 R2) with DNS enabled.
    - RDPed to the VM, enabled the DNS role, and installed its features.
    - Added the appropriate NetworkConfiguration XML section to each of my cloud services' .cscfg files.
    - Re-deployed my cloud services.

    I have verified that I set up the virtual network and network configuration properly, because my cloud service hosts are able to communicate with each other if I use the internal IPs. However, name resolution doesn't appear to be working, and it doesn't appear that my cloud service roles can communicate with my DNS server. How do I configure my VM so that the different cloud service roles register themselves with my DNS server?

    EDIT: I think I am one step closer to getting this to work. The cloud services that I was using are in an old affinity group which is not supported by VMs, so I was unable to add my VM to my virtual network. I created a new VNET in a new affinity group with my VM added to it. However, I still don't know how to configure the Azure VM's DNS server so that the cloud services register themselves for name resolution.

    Also, an added bonus guaranteed to get a +1 would be to explain whether it is possible to register a DNS entry for the VIP of an internal endpoint of my cloud services so we can get load balancing. Thanks!
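    A hedged sketch (an addition, not from the post; the zone, host name, and IP below are hypothetical): if the role instances do not register themselves, one fallback is to create a zone on the DNS VM, allow non-secure dynamic updates, and add records for the roles' internal IPs by hand:

        rem Sketch only (run on the DNS VM): create a file-backed zone, allow
        rem dynamic updates, and add an A record for a role instance's internal IP.
        dnscmd /zoneadd internal.example.com /primary /file internal.example.com.dns
        dnscmd /config internal.example.com /allowupdate 1
        dnscmd /recordadd internal.example.com webrole1 A 10.0.1.4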

    Read the article

  • USB mouse disconnecting and reconnecting in windows and linux

    - by Kalak
    I have a problem similar to what is described at "Why is my USB mouse disconnecting and reconnecting randomly and often?", except it is happening in both Windows 7 and Linux (Ubuntu 12.04 LTS, fully patched), with multiple mice and multiple OSes. The mouse stops responding to input for about 3-5 seconds, then starts responding again. It's more frequent and lasts longer when running games (TF2, L4D, Dishonored, Borderlands 2, and more), but it happens when just running the OS as well.

    I was hoping it was the motherboard, so I bought a USB 2.0 PCI card to try that, but it's still happening. I've stripped the setup down to just the keyboard and mouse (a different keyboard too, in case the keyboard was the problem), but it still happens. All the hardware (mice and keyboards) works fine on other computers. I have literally pulled the mouse and keyboard out, plugged them into another computer (a laptop), re-joined the online game, and had no problem with the keyboard-and-mouse combo that had just failed on my gaming rig.

    Please, no driver or Windows-only or Linux-only suggestions, as those wouldn't affect both OSes.

    Edit: Known-good mice I've brought home are now going bad. I suspect the hardware is messed up (voltage?) and has been frying the mice.

    Read the article

  • Kerberos & localhost

    - by Alex Leach
    I've got a Kerberos v5 server set up on a Linux machine, and it's working very well when connecting to other hosts (using Samba, LDAP, or SSH) for which there are principals in my Kerberos database. Can I use Kerberos to authenticate against localhost, though? And if I can, are there reasons why I shouldn't?

    I haven't made a Kerberos principal for localhost. I don't think I should; instead I think the principal should resolve to the machine's full hostname. Is that possible? I'd ideally like a way to configure this on just one server (whether Kerberos, DNS, or SSH), but if each machine needs some custom configuration, that would work too. For example:

        $ ssh -v localhost
        ...
        debug1: Unspecified GSS failure.  Minor code may provide more information
        Server host/[email protected] not found in Kerberos database
        ...

    EDIT: So I had a bad /etc/hosts file. If I remember correctly, the original version I got with Ubuntu had two 127.0.x.x addresses, something like:

        127.0.0.1  localhost
        127.0.1.1  hostname

    For no good reason, I'd changed mine a long time ago to:

        127.0.0.1  localhost
        127.0.0.1  hostname.example.com hostname

    This seemed to work fine with everything until I tried out SSH with Kerberos (a recent endeavour). Somehow this configuration led to sshd resolving the machine's Kerberos principal to "host/localhost@\n", which I suppose makes sense if it uses /etc/hosts for forward and reverse DNS lookups in preference to external DNS. So I commented out the latter line, and sshd magically started authenticating with gssapi-with-mic. Awesome. (Then I investigated localhost and asked this question.)

    Read the article

  • Shared printer stops working on clients but not host computer

    - by Tony
    I have a Brother MFC-7420 USB all-in-one laser printer. It is plugged in via USB to my Windows 7 x64 machine, and I have it shared with a few users on my home network. My wife's laptop, running Vista x64, can normally print to it fine. However, every day or two, when she pushes print on something, the job just sits in her laptop's print queue and never makes it to my desktop. The only thing that seems to fix this is restarting her laptop. Not a big deal, but this problem is sort of annoying. I don't know if this matters, but the laptop is put into hibernate at night and I turn my desktop off at night.

    Does anyone know why this happens, and whether there is something easy to do to fix the problem besides restarting the computer?

    EDIT: I was thinking that maybe my wife's laptop loses its connection to the printer. Is there a way to reset a connection to a shared printer? Or maybe re-authenticate with the printer?
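    One low-effort thing to try (an addition, hedged): when a job gets stuck, restarting the Print Spooler service on the laptop often re-establishes the connection to the shared printer without a full reboot:

        rem Untested sketch; run from an elevated command prompt on the Vista laptop
        net stop spooler
        net start spooler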

    Read the article

  • cannot log into mysql locally

    - by Lostsoul
    When I try to log into MySQL locally using the command

        mysql -u root -p

    I get this error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

    I can access the server remotely (not as root) and my web pages are using MySQL fine, but locally I cannot log on, which I need to do because I have to create some users. The only change I made was to attach another drive to the server and move the SQL data there. Here's my.cnf:

        [mysqld]
        datadir=/media/ephemeral0/data/mysql
        socket=/media/ephemeral0/data/mysql/mysql.sock
        user=mysql
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        # adding more config
        skip-external-locking
        long_query_time=1
        slow_query_log
        slow_query_log_file=/var/log/log-slow-queries.log
        log-bin=mysql-bin
        server-id= 1

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        myisam_recover_options

    I read that I need to edit the socket info in my.cnf to make sure it points to the right socket file. I double-checked, and the file exists, although its listing starts with an "s" when I do ls -l:

        srwxrwxrwx 1 mysql mysql 0 Jun 21 03:43 mysql.sock

    I'm not really sure how to resolve this. I have rebooted and run yum update to make sure I'm running the latest packages. Please help!
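    A hedged sketch of the usual explanation (an addition, not from the post): the socket path under [mysqld] only moves the server's socket; the mysql command-line client still looks in the default location unless told otherwise, for example:

        # Point the client at the relocated socket explicitly...
        mysql -u root -p --socket=/media/ephemeral0/data/mysql/mysql.sock

        # ...or persist it for client programs in my.cnf:
        # [client]
        # socket=/media/ephemeral0/data/mysql/mysql.sock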

    Read the article

  • DNS Resolution doesn't work after uninstalling Cisco VPN & Deterministic Network Enhancer in Win 7

    - by Craig M
    I just upgraded my home PC to Windows 7 Ultimate 32-bit. After trying various methods to get the Cisco VPN client to work, I gave up and decided to just run it in XP Mode. The last steps I tried were in this article: http://social.technet.microsoft.com/Forums/en-US/w7itproappcompat/thread/d880dfe5-7f44-4955-8620-2a9355d8ea8b/

    After that, I uninstalled the Cisco client and rebooted. I then uninstalled the Deterministic Network Enhancer and rebooted again. Both uninstalled successfully, but now I'm not able to resolve any DNS. The only way I can resolve DNS is to reinstall the DNE, reboot, and uninstall the DNE again. Then I am able to resolve DNS lookups until I reboot, at which point there's no more DNS. Any ideas?

    Edit: I completely forgot I'd asked this question until harrymc posted his answer. I've since found out that to fix this problem, I need to disable my Local Area Connection and re-enable it. Once I do that, I have no trouble making network connections until the next time I reboot, at which point I repeat the process. It's annoying, but manageable, since I reboot very infrequently.
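    For what it's worth (an addition, a hedged guess): the DNE hooks into the Windows network stack, so leftovers from an incomplete uninstall are a common culprit, and resetting the Winsock catalog and the TCP/IP stack before rebooting is a commonly suggested cleanup step:

        rem Untested sketch; run from an elevated command prompt, then reboot
        netsh winsock reset
        netsh int ip reset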

    Read the article

  • Intel Atom overheating in ASUS EEE Box 1501P

    - by Sergey L.
    I have had an ASUS EEE Box 1501P for just a little over a year. Of course, it breaks two months after the warranty runs out. (http://www.asus.com/Eee/EeeBox_PC/EeeBox_PC_EB1501P/)

    I have been using the box as a home media center, running mostly 24/7 and often pausing a video overnight. Since last week the fan has been running extremely loud. After some digging I found that the Intel Atom CPU in it is overheating, and the built-in sensor is reporting temperatures way over 105°C. This got me worried, so I took the unit apart, completely vacuumed the heat sink, and oiled the fan, but the unit still shows the same behaviour. After turning it on and just observing the hardware monitor in the BIOS, the temperature slowly rises from 40°C to over 95°C in approximately 5 minutes. I am running the newest BIOS and a lightweight Linux OpenELEC OS with XBMC on it.

    Now I am wondering if it could be a faulty heat sensor in the Atom. The recommended running temperature is up to 85°C, but I have not detected any performance hits when running at the above-mentioned 105°C, and there seem to be no software faults. How can an Atom with an attached heat sink and a fan running at full capacity even get this hot in the first place at zero load? Aren't those things designed to generate virtually no heat? Could it be a faulty heat sensor? What should I try to fix this?

    I would prefer not to damage the CPU, since it is fused onto the motherboard and cannot be replaced. I could remove the heat pipe/heat sink, but it is getting hot, so heat is properly transferring from the CPU to the heat pipe; the fan is running at full capacity and was recently oiled, and warm air is making it out of the exhaust.

    Edit: One more note: the north bridge (or whatever it is called nowadays) is on the same heat pipe.

    Read the article

  • can not connect to SQL running on amazon ec2 machine

    - by njj56
    I am using SQL Server Management Studio 2008 running on an Amazon EC2 machine. I am unable to connect to the database in my ASP.NET application. The EC2 instance has been set to accept connections over the SQL port, and I am able to remote into the machine as well as view websites hosted on the server. Listed below is the part of the connection string relating to this instance. When the program runs and this connection string is used, it returns TCP error 0 (no response); it just times out.

        <add name="ProjectServer" connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;User ID=IP-0A6ED514\Administrator;"/>

    I removed the IP and the catalog name for the example, but I am sure they are correct. The only thing that I could think might cause an error is the difference between the user ID and the server name: the server name is ip-0A6ED514\sharepoint, but the user name is ip-0A6ED514\administrator when I log into the SQL Server manager on the EC2 instance. A password is not used. I'm not sure if I need to leave in a blank string for the password, and I'm also not sure whether the difference between the server name and the user ID used to log in makes a difference. Any help is appreciated. Thank you.

    Update: when this connection string is used without the port, I get TCP provider error 40; when the port is in there, I get error 0.

    Edit: the SQL server is using Windows authentication; does this make a difference? Usually I always use SQL Server authentication.
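    A hedged aside (an addition, not from the post): a connection string cannot carry a Windows account name in User ID to get Windows authentication; it either requests integrated security or supplies a SQL Server login and password. Two illustrative variants, with the SQL login name being hypothetical:

        <!-- Windows authentication: the ASP.NET process identity is used -->
        <add name="ProjectServer"
             connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;Integrated Security=SSPI;" />

        <!-- SQL Server authentication: requires mixed mode and a SQL login -->
        <add name="ProjectServer"
             connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;User ID=appLogin;Password=*PASSWORD HERE*;" />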

    Read the article

  • Configuring vlans on Cisco SG200 series switch with Ubuntu server

    - by nixnotwin
    I created a VLAN on Ubuntu with the vconfig tool, with ID 21 and eth1 as the host port. I connected eth1 to one of the ports on the switch (GE23), as all ports trunk by default. In the web GUI I created a VLAN named "test" with ID 21 and made port GE2 an access port. In the port-to-VLAN mapping I selected VLAN 21 and added port GE2 to it with the untagged option. I have assigned 192.168.1.1/24 as the IP of eth1.21 on Ubuntu.

    If I connect another client PC to port GE2 with an IP of 192.168.1.2/24, I cannot ping the server IP (192.168.1.1/24). Ping from the server to the client does not work either. I inspected the packets that are sent out of eth1 on the server and I could see the VLAN 21 tag. I then connected the other end of the cable to a different Linux PC and inspected the packets, but no VLAN tags can be seen. What could be preventing me from getting VLANs working?

    Edit 1: screenshots:
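    For reference (an addition, a hedged restatement of the Linux side described above as commands, not a confirmed fix):

        # Sketch of the Linux side as described in the question
        modprobe 8021q                          # 802.1Q tagging support
        vconfig add eth1 21                     # create eth1.21 with VLAN ID 21
        ip addr add 192.168.1.1/24 dev eth1.21
        ip link set eth1 up
        ip link set eth1.21 up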

    Read the article

  • Setting up a dualboot by installing cloned partitions using clonezilla

    - by Nimjox
    I'm trying to set up a dual-boot system with Windows 7 and Linux Mint. Here's the kicker: both are partition images I saved using Clonezilla from different places, and to make matters worse, the Linux Mint one is formatted as LVM. I need both of these images specifically, as Windows is a corporate image that I must use and the other is a development image that took me a week to set up.

    I've gotten it almost all working, but my issue is that I can't get Clonezilla to restore Mint without messing up Windows' partition table, or vice versa. I can use the -k1 option, which doesn't copy the partition table, but then the clone ends up in an unusable partition and I'm not sure how to fix the partition table. Here's what I'm doing:

    1) Use GParted to make the partitions: sda1, 40 GB NTFS (Windows); sda2, extended, 70 GB; sda5, LVM2 PV, 69.99 GB (Linux); sda3, 500 MB (GRUB).
    2) Restore the Clonezilla Windows image into the sda1 partition (keeping the partition table).
    3) Restore the Clonezilla Linux image into the sda5 partition (not recreating the partition table).

    After all that, I can boot into Windows using the default MBR. I can use a rescue/repair CD to reinstall GRUB, which will see Windows 7, but I can't get it to see the Linux OS. I'm thinking it's because of the sda5 partition, but I'm not sure. Any ideas on what I could do to get this working, or where I might be going wrong? If there is any additional detail you need, please let me know and I'll edit, as this is a lot.
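    A hedged sketch of one direction (an addition, not from the post; the volume group and logical volume names are hypothetical): GRUB's os-prober usually only finds an LVM-based install if the volume group is activated first, so from a live session one can activate LVM, chroot into the restored Mint system, and reinstall/update GRUB:

        # Untested sketch from a live session
        sudo vgchange -ay                           # activate the restored volume group
        sudo mount /dev/mint-vg/root /mnt           # hypothetical VG/LV name
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt grub-install /dev/sda
        sudo chroot /mnt update-grub                # should also pick up Windows via os-prober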

    Read the article

  • scp No such file or directory

    - by Joe
    I have a confusing question for which Super User doesn't seem to have a good answer, and neither does Google. I'm trying to scp a file from a remote server to my local machine. The command is this:

        scp user@server:/path/to/source/file.gz /path/to/destination

    The error I get is:

        scp: /path/to/source/file.gz: No such file or directory

    "user" is my username on the server. The command syntax appears fine to me, ssh works fine, I can cd to the file, and it doesn't seem to be an access control issue. Thanks.

    Edit: Thank you, John. I spotted the issue. ls returned this:

        -r--r--r-- 1 nobody users 168967171 Mar 10  2009 /path/to/source/file.gz

    So the file was on a read-only file system, and the user was able to read it but not scp it. I just copied the file to a different directory, chowned it, and it worked fine. It would be good if someone could explain why this is the case, though.

    Read the article

  • Linux Mint 13 is not booting on dual boot computer

    - by Brian
    Thanks in advance for your time. I have two hard drives in my computer: a 300 GB drive, which is my primary drive for Windows 7, and a 1.5 TB drive that I'd used for storage. When I got it, I partitioned off 500 GB for use with Linux. So I created a bootable USB and clicked the "Install by Current Operating System" option from Mint, and it installed to the free 500 GB as I'd hoped it would. Now I can't get it to boot, though. I've tried using EasyBCD to create the boot entry, and it hangs on a black screen. Thanks.

    EDIT (@Ryhuk): It presents a menu with two options: 1) Windows and 2) Mint. This is a menu I created with EasyBCD. When I select option 1, it boots to Windows fine. When I select option 2, it hangs on a black screen with just a white bar flashing (can't remember what it's called; it marks the current cursor location on a text field) and won't respond to any key presses except Alt-Ctrl-Del.

    Read the article
