Search Results

Search found 18729 results on 750 pages for 'edit'.


  • Why can't I connect to computers on my network using our external IP address?

    - by Kivin
    My home network is serviced by an ADSL line. The modem is in bridged mode and the router performs the PPPoE. Three computers are connected to the router: two wired Windows 7 boxes and an Ubuntu Linux box over wifi. The computers host various services, including FTP and HTTP, and the router has port forwarding mapped from the relevant ports to the reserved IP addresses of the computers.

    If I attempt to connect to a server inside the network using the external address, such as ftp://67.xx.xxx.xxx, from inside the network, the request times out. However, if I connect using the internally mapped address, such as ftp://192.168.0.100, all is well. This is a nuisance for setting up software, especially on the laptop, which needs to be able to phone home from anywhere, and I don't have enough networking expertise to know why this occurs or whether it can be solved.

    Edit: It should be noted that the servers are accessible from outside the network - say, at the Starbucks across the street - perfectly fine, using the ISP-provided address and the appropriate port.
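
    What is described here is the classic symptom of a router without NAT loopback (hairpin NAT): it applies the port-forwarding rules only to traffic arriving on the WAN interface. If the router has no loopback setting to enable, one hedged workaround is a split-horizon hosts entry on each LAN machine, so that software is pointed at a hostname that resolves internally to the private address while resolving publicly everywhere else (the hostname below is a placeholder):

        # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on the Windows boxes)
        192.168.0.100   myserver.example.com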

    Read the article

  • RAID-5 performance scaling per spindle

    - by Bill N.
    So I am stuck in a corner. I have a storage project that is limited to 24 spindles and requires heavy random write (the corresponding read side is purely sequential). It needs every bit of space on my drives, ~13 TB total in an n-1 RAID-5, and it has to go fast - over 2 GB/s sort of fast. The obvious answer is to use a stripe/concat (RAID-0/1), or better yet a RAID-10 in place of the RAID-5, but that is disallowed for reasons beyond my control. So I am here asking for help in getting a sub-optimal configuration to be as good as it can be.

    The array is built on direct-attached SAS-2 10K RPM drives, backed by an ARECA 18xx series controller with 4 GB of cache, using 64k array stripes and a 4K stripe-aligned XFS file system with 24 allocation groups (to avoid some of the penalty for being RAID-5).

    The heart of my question is this: in the same setup with 6 spindles/AGs I see near disk-limited performance on the write, ~100 MB/s per spindle; at 12 spindles that drops to ~80 MB/s, and at 24 to ~60 MB/s. I would expect that with distributed parity and matched AGs the performance would scale with the number of spindles, or be worse at small spindle counts, but this array is doing the opposite. What am I missing? Should RAID-5 performance scale with the number of spindles? Many thanks for your answers and any ideas, input, or guidance. --Bill

    Edit: Improving RAID performance is the other relevant thread I was able to find; it discusses some of the same issues in the answers, though it still leaves me without an answer on the performance scaling.
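
    A rough sanity check on why per-spindle throughput might fall as the array widens (a hedged sketch, not a definitive diagnosis): RAID-5 avoids the read-modify-write penalty only when a write covers a full stripe, and the full stripe grows with the spindle count:

        full stripe = (N - 1) x 64 KiB
        6 spindles:   5 x 64 KiB =  320 KiB
        12 spindles: 11 x 64 KiB =  704 KiB
        24 spindles: 23 x 64 KiB = 1472 KiB

    A random write that does not fill the stripe costs roughly four I/Os (read old data, read old parity, write both back), so the wider the array, the larger the fraction of writes that pay that ~4x penalty - which would produce exactly the inverse scaling described above.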

    Read the article

  • Tell VLC where to look for plugins.dat file

    - by puk
    I am trying to build vlc from source (I will include the installation script below), but when I try to run vlc I get the following error:

        main libvlc warning: cannot read /home/user/downloads/vlc3/vlc/src/.libs/vlc/plugins/plugins.dat (No such file or directory)

    Why is it even looking in that non-existent directory? The plugins.dat file is in /usr/lib/vlc/plugins/. I tried

        export VLC_PLUGIN_PATH=/usr/lib/vlc/plugins/

    but it still looks in that non-existent path. I can create a symbolic link, but that is a terrible way to do it: if in 6 months I delete my downloads folder, all of a sudden my vlc will break. Here is the script I am running to install:

        ./configure --enable-rpi-omxil --enable-dvbpsi --enable-x264 --enable-xcb --with-x \
            --enable-xvideo --enable-sdl --enable-avcodec --enable-avformat --enable-swscale \
            --enable-mad --enable-a52 --enable-libmpeg2 --enable-dvdnav --enable-faad \
            --enable-vorbis --enable-ogg --enable-theora --enable-mkv --enable-freetype \
            --enable-fribidi --enable-speex --enable-flac --enable-live555 --enable-caca \
            --enable-skins2 --enable-alsa --enable-ncurses --enable-debug --enable-lirc \
            --enable-shout --enable-taglib --enable-vcdx --enable-realrtsp --enable-svg \
            --enable-dvdread --enable-dc1394 --enable-twolame --enable-dirac --enable-aa \
            --enable-jack --enable-bluray --enable-opencv --enable-sftp --enable-pulse \
            --enable-projectm --enable-vsxu --enable-atmo --enable-glspectrum \
            '--with-extra-libs=/usr/local/lib' '--with-extra-includes=/usr/local/include' \
            '--x-libraries=/usr/local/lib' '--x-includes=/usr/local/include' \
            '--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/'

    EDIT: I am using the following version:

        VLC media player 2.2.0-git Weatherwax (revision 2.1.0-git-1168-g5804dd1)

    and the --plugin-path option is no longer supported.
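
    A VLC binary run straight from the build tree looks for its plugin cache relative to itself, which would explain the src/.libs path. A couple of things worth trying (a sketch, assuming the default --prefix=/usr/local layout; the vlc-cache-gen invocation may differ between versions):

        sudo make install                  # so the installed binary and plugin dir agree
        sudo /usr/local/lib/vlc/vlc-cache-gen /usr/local/lib/vlc/plugins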

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times, and I'm looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Is there a big difference in time between i7 and Xeon? Etc., etc. I just need advice from people who've put together kick-a build clusters. We've got a majority of the normal things to speed up builds in place (pre-compiled headers, ccache, local gigabit connections between the machines, tons of RAM, etc.), so please just give advice on the best processor to use. Money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay

    EDIT: Although any advice IS welcome, please refrain from "do this first" posts, as we're not planning on skimping on things like SSDs, maxed-out RAM, etc. My personal system is an iMac quad-core i5 with 8 GB of RAM. When I build our project locally, my processor floats around 99-100% the majority of the time, which makes me assume it is the bottleneck even if everything else were made faster. My RAM, on the other hand, doesn't even get close to maxing out. It's also worth noting that I did research this; however, every discussion I could find was primarily about gaming machines, which is obviously a different beast in usage. These machines won't even have monitors or anything but integrated graphics, since they have one purpose: build freakin' fast. (Hopefully.)
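
    Independent of the CPU question, for reference here is a minimal distcc setup sketch (distcc being the tool usually meant by DistGCC; the host names and core counts below are made up):

        # on each build host: run the distcc daemon, allowing the build LAN
        distccd --daemon --allow 192.168.1.0/24

        # on the machine driving the build
        export DISTCC_HOSTS="localhost/4 buildbox1/8 buildbox2/8"
        make -j20 CC="distcc gcc" CXX="distcc g++"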

    Read the article

  • Windows 7 wireless not seeing any networks

    - by jkohlhepp
    I think I have managed to confuse Windows 7. When I did the install, I had the network cable plugged into my router, but the wireless card was also enabled. During the install, Windows 7 seemed to see my wireless network and even asked me for the WEP key. I know that it used the WEP key because I initially entered an invalid one and it gave me an error. Then the network said "SoAndSoWireless Connected".

    However, when I unplug or disable my wired network card, I have no internet and it can't see any networks. When I plug in the wired network card, it says "SoAndSoWireless Connected". Under Network and Internet > Network Connections I have "Local Area Connection" and "Wireless Network Connection". The wired one's status is "SoAndSoWireless" and the wireless one's status is "Not Connected". Also, the wireless connection can't seem to see any other wireless networks in the area, and I know there are tons - my neighbors have several.

    I've somehow seemingly confused Windows 7 into thinking that my wired network card is my wireless card or something. Any ideas on how to un-confuse it? This is a desktop machine, by the way, if it matters.

    EDIT: Ah, I think part of the problem is that I accidentally named my network the same as the name of the wireless network being broadcast by the wireless router. So that might be why it says that name on the hard-wired connection. Perhaps the drivers for the wireless card are just completely not working. Thanks, ~ Justin
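
    One hedged way to see what the wireless adapter itself reports, independent of the Network Connections UI, is the built-in netsh tool from an elevated command prompt:

        netsh wlan show drivers
        netsh wlan show interfaces
        netsh wlan show networks mode=bssid

    If "show networks" comes back empty while the neighbors' networks are clearly on the air, that points at the wireless driver or radio rather than at the connection-naming mix-up.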

    Read the article

  • Kerberos & localhost

    - by Alex Leach
    I've got a Kerberos v5 server set up on a Linux machine, and it's working very well when connecting to other hosts (using samba, ldap or ssh) for which there are principals in my kerberos database. Can I use kerberos to authenticate against localhost, though? And if I can, are there reasons why I shouldn't?

    I haven't made a kerberos principal for localhost. I don't think I should; instead I think the principal should resolve to the machine's full hostname. Is that possible? I'd ideally like a way to configure this on just one server (whether kerberos, DNS, or ssh), but if each machine needs some custom configuration, that'd work too. e.g.

        $ ssh -v localhost
        ...
        debug1: Unspecified GSS failure.  Minor code may provide more information
        Server host/[email protected] not found in Kerberos database
        ...

    EDIT: So I had a bad /etc/hosts file. If I remember correctly, the original version I got with Ubuntu had two 127.0. IP addresses, something like:

        127.0.0.1    localhost
        127.0.1.1    hostname

    For no good reason, I'd changed mine a long time ago to:

        127.0.0.1    localhost
        127.0.0.1    hostname.example.com hostname

    This seemed to work fine with everything until I tried out ssh with kerberos (a recent endeavour). Somehow this configuration led to sshd resolving the machine's kerberos principal to "host/localhost@\n", which I suppose makes sense if it uses /etc/hosts for forward and reverse DNS lookups in preference to external DNS. So I commented out the latter line, and sshd magically started authenticating with gssapi-with-mic. Awesome. (Then I investigated localhost and asked the question.)

    Read the article

  • Setting up a dual boot by installing cloned partitions using Clonezilla

    - by Nimjox
    I'm trying to set up a dual-boot system where I have Windows 7 and Linux Mint. Here's the kicker: both are partitions I've saved using Clonezilla from different places, and to make matters worse, Linux Mint is formatted as an LVM. I need both of these images specifically, as Windows is a corporate image that I must use and the other is a development image that took me a week to set up. I've gotten it almost all working, but my issue is that I can't get Clonezilla to not mess up the partition table of Windows when installing Mint, or vice versa. I can use the -k1 option, which doesn't copy the partition table, but then I have an unusable partition when it clones, and I'm not sure how to fix the partition table. Here's what I'm doing:

    1. Using GParted to make partitions: sda1 40 GB ntfs (Windows), sda2 extended 70 GB, sda5 lvm2 pv 69.99 GB (Linux), sda3 500 MB (GRUB)
    2. Clonezilla the Windows image into the sda1 partition (keeping the partition table)
    3. Clonezilla the Linux image into the sda5 partition (not recreating the partition table)

    After all that, I can boot into Windows using the default MBR. I can use a rescue-repair CD to reinstall GRUB, which will see Windows 7, but I can't get it to see the Linux OS. I'm thinking it's because of the sda5 partition, but I'm not sure. Any ideas on what I could do to get this working or where I might be going wrong? If there is any additional detail you need, please let me know and I'll edit, as this is a lot.
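
    Since GRUB needs the LVM volumes activated before it can see the Mint install, here is a rough sketch of a manual reinstall from a live CD (the volume group and logical volume names below are placeholders - check yours with lvscan):

        sudo vgchange -ay                      # activate the LVM volumes first
        sudo mount /dev/mapper/vg0-root /mnt   # placeholder LV name
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt
        grub-install /dev/sda
        update-grub                            # os-prober should then pick up Windows 7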

    Read the article

  • Cannot log into MySQL locally

    - by Lostsoul
    When I try to log into mysql locally using the command:

        mysql -u root -p

    I get this error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

    I can access the server remotely (not as root) and my web pages are using mysql fine, but locally I cannot log on (which I need to do because I need to create some users). The only change I made was to attach another drive to the server and move the SQL data there. Here's my.cnf:

        [mysqld]
        datadir=/media/ephemeral0/data/mysql
        socket=/media/ephemeral0/data/mysql/mysql.sock
        user=mysql
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        # adding more config
        skip-external-locking
        long_query_time=1
        slow_query_log
        slow_query_log_file=/var/log/log-slow-queries.log
        log-bin=mysql-bin
        server-id=1

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        myisam_recover_options

    I read that I need to edit the socket info in my.cnf to make sure it points to the right socket file. I double-checked, and the file exists (although it starts with an "s" when I do ls -l: "srwxrwxrwx 1 mysql mysql 0 Jun 21 03:43 mysql.sock"). I'm not really sure how to resolve this. I have tried rebooting and ran yum update to make sure I was running the latest packages. Please help!
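
    The server has moved its socket along with the datadir, but the mysql client still looks at the compiled-in default /var/lib/mysql/mysql.sock. A sketch of both the one-off and the permanent fix:

        # one-off: point the client at the relocated socket
        mysql -u root -p --socket=/media/ephemeral0/data/mysql/mysql.sock

        # permanent: add a [client] section to my.cnf
        [client]
        socket=/media/ephemeral0/data/mysql/mysql.sock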

    Read the article

  • Intel Atom overheating in ASUS EEE Box 1501P

    - by Sergey L.
    I have had an ASUS EEE Box 1501P for just a little over a year. Of course it breaks 2 months after the warranty runs out. http://www.asus.com/Eee/EeeBox_PC/EeeBox_PC_EB1501P/

    I have been using the box as a home media center, running mostly 24/7 and often pausing a video overnight. Since last week the fan has been running extremely loud. After some digging I found that the Intel Atom CPU in it is overheating and the built-in sensor is reporting temperatures way over 105°C. This got me worried, so I took the unit apart, completely vacuumed the heat sink and oiled the fan, but the unit still shows the same behaviour. After turning it on and just observing the hardware monitor in the BIOS, the temperature slowly rises from 40°C to over 95°C in approximately 5 minutes. I am running the newest BIOS and a lightweight Linux OpenELEC OS with XBMC on it.

    Now I am wondering if it could be a faulty heat sensor in the Atom. The recommended running temperature is up to 85°C, but I have not detected any performance hits when running at the above-mentioned 105°C, and there seem to be no software faults. How can an Atom with an attached heat sink and a fan running at full capacity even get this hot in the first place at zero load? Aren't those things designed to generate virtually no heat? Could it be a faulty heat sensor? What should I try in order to fix this? I would prefer not to damage the CPU, since it is fused onto the motherboard and cannot be replaced. I could remove the heat pipe/heat sink, but it is getting hot, so heat is properly transferring from the CPU to the heat pipe; the fan is running at full capacity and was recently oiled, and warm air is making it out of the exhaust.

    Edit: One more note: the north bridge (or whatever it is called nowadays) is on the same heat pipe.
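
    To judge whether the sensor or the cooling is lying, it can help to compare the BIOS figure with what the OS reports; a sketch using lm-sensors from a stock Ubuntu/Debian live USB (OpenELEC itself may not ship these tools):

        sudo apt-get install lm-sensors
        sudo sensors-detect          # accept the defaults, then load the suggested modules
        watch -n 1 sensors           # watch the core temperature climb from idle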

    Read the article

  • Difference between "traceroute" and "traceroute -U"

    - by AndiDog
    The manpage of traceroute says that the -U parameter (UDP probing) is the default, but I'm getting different results every time. With -U:

        traceroute -U www.univ-paris1.fr
        traceroute to www.univ-paris1.fr (193.55.96.121), 30 hops max, 60 byte packets
        [...]
        13  rap-vl165-te3-2-jussieu-rtr-021.noc.renater.fr (193.51.181.101)  59.445 ms  56.924 ms  56.651 ms
        [...]
        18  * paris1web.univ-paris1.fr (193.55.96.121)  23.797 ms  23.603 ms

    but the normal traceroute gives me another result (it never reaches the final node) - it's either "!X" or it just exits after the maximum of 30 hops:

        traceroute www.univ-paris1.fr
        traceroute to www.univ-paris1.fr (193.55.96.121), 30 hops max, 60 byte packets
        [...]
        11  te1-1-paris1-rtr-021.noc.renater.fr (193.51.189.38)  28.147 ms  28.250 ms  28.538 ms
        [... non-responding nodes ...]
        28  site-1.03-jussieu.rap.prd.fr (195.221.126.58)  85.941 ms !X * *

    Note: I tried this very often and always get the same results. The path in my local network is always the same. So what does the -U parameter actually change here? I'm especially interested in what the reason for "!X" could be (communication administratively prohibited).

    EDIT: If it helps, paris-traceroute gives me the following for the last hop:

        14  P(1, 6) site-1.03-jussieu.rap.prd.fr (195.221.126.58)  34.938 ms !5 !T2

    which means that node discards the packet with TTL=2 and returns an unknown message (not "destination unreachable" or the like).

    Read the article

  • Updated XAMPP with MySQL, all my tables are missing

    - by user371699
    I just updated XAMPP to a newer version, which included updating MySQL from 5.5 to 5.6. In phpMyAdmin, however, all of the tables within my databases still appear in the left navigation panel, but the main window shows that all my databases are empty (except for information_schema and a couple of other default tables). Clicking on a table in the navigation panel gives me a "table doesn't exist" message. It also looks like information_schema.tables doesn't have my tables.

    Can anyone assist me with this? I did make a complete backup of all my databases before the upgrade, but I first want to see if I can fix this the "normal" way. Furthermore, I'm not sure if the MySQL upgrade involved making changes to the information/performance databases, so I don't know if I can restore the old ones. Thank you.

    EDIT: Continuing my searching, I realized that only the InnoDB tables are missing. I've tried running the following, to no avail:

        /opt/lampp/bin $ sudo ./mysql_install_db --basedir=/opt/lampp

    and

        /opt/lampp/bin $ sudo ./mysql_install_db --basedir=/opt/lampp --datadir=/opt/lampp/var/mysql

    The my.cnf file in /opt/lampp/etc contains the following InnoDB settings:

        innodb_data_home_dir = /opt/lampp/var/mysql/
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_log_group_home_dir = /opt/lampp/var/mysql/
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        innodb_buffer_pool_size = 16M
        # Deprecated in 5.6
        #innodb_additional_mem_pool_size = 2M
        # Set .._log_file_size to 25 % of buffer pool size
        innodb_log_file_size = 5M
        innodb_log_buffer_size = 8M
        innodb_flush_log_at_trx_commit = 1
        innodb_lock_wait_timeout = 50

    What could possibly be wrong? Why is information_schema not updating correctly? It looks like /opt/lampp/var/mysql has all my tables in it within the database directories, but they're still not showing up in information_schema.
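
    mysql_install_db only rebuilds the system tables; InnoDB tables live in ibdata1 (plus the ib_logfile files) under the datadir, so the pre-upgrade copies have to come across together. A hedged sketch, assuming the old datadir was preserved (/path/to/old/mysql is a placeholder):

        sudo /opt/lampp/lampp stopmysql
        sudo cp /path/to/old/mysql/ibdata1 /opt/lampp/var/mysql/
        sudo cp /path/to/old/mysql/ib_logfile* /opt/lampp/var/mysql/
        sudo chown -R mysql:mysql /opt/lampp/var/mysql
        sudo /opt/lampp/lampp startmysql

    If the server then refuses to start because the log files don't match innodb_log_file_size, deleting the copied ib_logfile* after a clean shutdown and letting MySQL recreate them is the usual next step.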

    Read the article

  • Archive software for big files with a fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. The problem is: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from the archive, but I don't currently have an external index of files. So, is there an alternative for Linux that allows me to build uncompressed archive files, preserving the file attributes AND having a fast access list table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but single archives are non-optional, so no rsync or similar). Thanks in advance!

    EDIT: I'm not compressing archives, and using tar I think they are too slow. To be precise about "slow", I'd like that:

    - listing archive content should take time linear in the file count inside the archive, but with a very small constant (e.g. if a list of all the files is included at the head of the archive, it could be very fast);
    - extraction of a target file/directory should (filesystem permitting) take time linear in the target size (e.g. if I'm extracting a 2 MB PDF file in a 40 GB directory, I'd really like it to take less than a few minutes... if not seconds).

    Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets, and such an index were well organized (e.g. tree structure).
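
    One tool that matches this description is dar (Disk ARchive), which writes a catalogue into the archive so listing and single-file extraction don't have to scan the whole thing. A hedged sketch (paths and names made up):

        dar -c /backups/mybackup -R /data              # create /backups/mybackup.1.dar
        dar -l /backups/mybackup                       # fast listing from the catalogue
        dar -x /backups/mybackup -g docs/report.pdf    # extract a single file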

    Read the article

  • Linux Mint 13 is not booting on a dual-boot computer

    - by Brian
    Thanks in advance for your time. I have two hard drives in my computer: a 300 GB drive, which is my primary drive for Windows 7, and a 1.5 TB drive that I'd used for storage. When I got it, I partitioned 500 GB for use in Linux. So, I created a bootable USB and clicked the "Install by Current Operating System" option from Mint. It installed to the free 500 GB like I'd hoped it would. Now I can't get it to boot, though. I've tried using EasyBCD to create the boot entry, and it hangs on a black screen. Thanks.

    EDIT @ Ryhuk: It presents a menu with two options: 1) Windows and 2) Mint. This was a menu I created with EasyBCD. When I select option 1, it boots to Windows fine. When I select option 2, it hangs on a black screen with just a white bar flashing (can't remember what it's called - it marks the current cursor location on a text field) and won't respond to any key presses but Alt+Ctrl+Del.

    Read the article

  • Any way to bring my laptop battery back to life?

    - by Josh
    Recently my laptop battery has been getting extremely hot (definitely hotter than it should get) when I charge it. After that I usually end up removing it once it's fully charged to let it cool down, which takes a couple of hours... The question is: is my battery dead? The last battery I had that died just ended up lasting 2-3 minutes on battery, with no weird heat issues. And is there any way to possibly fix this? Probably not, but I won't be able to get a replacement anytime soon.

    UPDATE: A few days ago when this happened and it had cooled down, assuming it was fully charged, I ran my laptop on battery, and the battery lasted about 10 minutes before the laptop shut down. I then plugged it in later and charged it back up, and for a while I had an orange light blinking on my laptop - which I assumed meant the battery was dead, especially since I got 10 minutes of battery life. Then today I turned my laptop on and was surprised to see that the battery was at 20% and charging (it's been plugged in since the incident above, so it should have been fully charged when I shut it off). I let it charge up, and as usual it got pretty hot around the time it was fully charged. So I turned my laptop off and pulled the battery out to let it cool down. Now the thing is, just now I tried running it on battery, and it's been going for an hour... so maybe it's not dead? (Also, the orange light is no longer blinking...) Thanks in advance if anyone knows what's going on and how to fix it, if it's fixable =]

    EDIT: Some info if it helps: my laptop is about 2 years old, and it's an Asus K50ID. I know laptop batteries usually don't last more than a year, but I'm trying to keep this one going for as long as I can.

    Read the article

  • USB mouse disconnecting and reconnecting in Windows and Linux

    - by Kalak
    I have a problem similar to what is described at "Why is my USB mouse disconnecting and reconnecting randomly and often?", except it is happening in both Windows 7 and Linux (Ubuntu 12.04 LTS, fully patched) - multiple mice, multiple OSs. The mouse stops responding to input for about 3-5 seconds, then starts responding again. It's more frequent and lasts longer when running games (TF2, L4D, Dishonored, Borderlands 2, and more), but it happens when just running the OS as well.

    I was hoping it was the motherboard, so I bought a USB 2.0 PCI card to try that, but it's still happening. I've stripped the setup down to just the keyboard and mouse (a different keyboard too, just in case the keyboard was the problem), but it's still happening. All the hardware (mice and keyboards) works fine on other computers. I have literally pulled the mouse and keyboard out, plugged them into another computer (a laptop), re-joined the online game, and had no problem with the keyboard-and-mouse combo that had just failed on my gaming rig. Please, no driver suggestions or Windows-only or Linux-only suggestions, as those wouldn't affect both OSs.

    Edit: known-good mice I've brought home are now going bad. I suspect the hardware is messed up (voltage?) and has been frying the mice.

    Read the article

  • Cannot connect to SQL Server running on an Amazon EC2 machine

    - by njj56
    I am using SQL Server Management Studio 2008 running on an Amazon EC2 machine, and I am unable to connect to the database in my ASP.NET application. The EC2 instance has been set to accept connections over the SQL port. I am also able to remote into the machine, as well as view websites hosted on the server. Listed below is the part of the connection string relating to this instance. When the program runs and this connection string is called, it returns TCP error 0 - no return response; it just times out.

        <add name="ProjectServer" connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;User ID=IP-0A6ED514\Administrator;"/>

    I removed the IP and the catalog name for the example, but I am sure they are correct. The only thing that I could think may cause an error is the difference between the user ID and the server name - the server name is ip-0A6ED514\sharepoint but the user name is ip-0A6ED514\administrator when I log into the SQL Server manager on the EC2 instance. A password is not used. Not sure if I would need to leave in a blank string for the password - also not sure if the difference between the server name and the user ID used to log in makes a difference. Any help is appreciated. Thank you.

    Update: when this connection string is used without the port, I get TCP provider error 40; when the port is in there, I get error 0.

    Edit: the SQL server is using Windows authentication - does this make a difference? Usually I always use SQL Server authentication.
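
    For what it's worth, a connection string that carries only a Windows account name in User ID cannot actually perform Windows authentication from another machine; that requires Integrated Security=SSPI and a shared identity. A hedged sketch of the SQL-authentication form instead (login and password are placeholders, and the instance must allow mixed-mode authentication):

        <add name="ProjectServer"
             connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;User ID=webappuser;Password=secret;"/>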

    Read the article

  • How do I diagnose a bottleneck in an Intel Atom based Ubuntu server?

    - by Jon Cage
    I have a small media server at home which has software RAID and a gigabit link to the rest of my network. For some reason, though, I only get ~10 MB/s transfers when copying to/from the server. I use software RAID-5 (mdadm) over four 1 TB disks. On top of that I then use LVM to give me a huge pool of disk space, which is split up into multiple partitions that can be resized as and when they need it. I'm guessing this is the most likely cause, but I'd like to know for sure where the root cause is. So, how can I benchmark network throughput (Windows 7 desktop <-> Ubuntu server) and hard disk performance to try and identify where my bottleneck might be?

    [Edit] If anyone's interested, the motherboard is an Intel Desktop Board D945GCLF2. So that's a 300-series Atom processor with the Intel 945GC Express chipset.

    [Edit2] I feel like such a fool! I just checked my desktop, and I had the slower of the two onboard NICs plugged in, so the server is probably not at fault here. Transferring a copy of Ubuntu off the server, I get ~35-40 MB/s according to Windows 7. I'll do those HD tests when I get a chance though (just for completeness).
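
    A sketch of how the two halves could be benchmarked separately (device and paths are placeholders; iperf also needs a Windows build on the desktop side):

        # network: run the first command on the server, the second on the desktop
        iperf -s
        iperf -c <server-ip>

        # raw read speed of the md array
        sudo hdparm -t /dev/md0

        # write speed through LVM and the filesystem, bypassing the page cache
        dd if=/dev/zero of=/srv/media/ddtest bs=1M count=2048 oflag=direct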

    Read the article

  • Configure an Azure VM for Dynamic DNS for Cloud Services

    - by Adam
    I am trying to set up an Azure VM with proper DNS to allow multiple cloud services to communicate across cloud service boundaries. As I understand it, I need to provide my own DNS server, and I do not have any on-premise infrastructure, so I am trying to configure an Azure VM to act as my DNS. This SO question (http://stackoverflow.com/questions/21858926/azure-how-to-connect-one-cloud-service-with-other-in-one-virtual-network) is very similar to my setup, and this article (http://msdn.microsoft.com/en-us/library/windowsazure/jj156088.aspx) describes my particular case: name resolution between virtual machines and role instances located in the same virtual network but different cloud services.

    Here is what I have done:

    1. Created an Azure virtual network and declared subnets for each of my cloud services.
    2. Created an Azure VM (Windows 2012 R2) with DNS enabled.
    3. RDPed to the VM, enabled the DNS role, and installed features.
    4. Added the appropriate NetworkConfiguration XML section to each of my cloud services' .cscfg files.
    5. Re-deployed my cloud services.

    I have verified that I set up the virtual network and network configuration properly, because my cloud service hosts are able to communicate with each other if I use the internal IPs. However, name resolution doesn't appear to be working, and it doesn't appear that my cloud service roles can communicate with my DNS server. How do I configure my VM so that my different cloud service roles register themselves with my DNS server?

    EDIT: I think I am one step closer to getting this to work. The cloud services that I was using are in an old affinity group, which is not supported by VMs, so I was unable to add my VM into my virtual network. I created a new VNET in a new affinity group with my VM added into it. However, I still don't know how to configure the Azure VM's DNS server so that the cloud services register themselves for name resolution. Also, an added bonus guaranteed to get a +1 would be to explain whether it is possible to register a DNS entry for the VIP for an internal endpoint of my cloud services so we can get load balancing. Thanks!
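
    For the registration half, one hedged approach on a Windows DNS server is to allow dynamic updates on the zone and have each instance register itself (the zone name below is a placeholder; nonsecure updates are reasonable only inside a private VNET):

        REM on the DNS VM: permit dynamic updates for the zone
        dnscmd . /Config example.internal /AllowUpdate 1

        REM on each role instance, once its NIC points at the DNS VM
        ipconfig /registerdns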

    Read the article

  • DNS Resolution doesn't work after uninstalling Cisco VPN & Deterministic Network Enhancer in Win 7

    - by Craig M
    I just upgraded my home PC to Windows 7 Ultimate 32-bit. After trying various methods to get the Cisco VPN client to work, I gave up and decided to just run it in XP Mode. The last steps I tried were in this article: http://social.technet.microsoft.com/Forums/en-US/w7itproappcompat/thread/d880dfe5-7f44-4955-8620-2a9355d8ea8b/

    After that, I uninstalled the Cisco client and rebooted. I uninstalled the Deterministic Network Enhancer (DNE) and rebooted again. Both uninstalled successfully, but now I'm not able to resolve any DNS. The only way I can resolve DNS is to reinstall the DNE, reboot, and uninstall the DNE. Then I am able to resolve DNS lookups until I reboot again. Once it's rebooted, no more DNS. Any ideas?

    Edit: I completely forgot I'd asked this question until harrymc posted his answer. I've since found out that to fix this problem, I need to disable my Local Area Connection and re-enable it. Once I do that, I have no trouble making network connections until the next time I reboot, at which point I repeat the process. It's annoying, but manageable, since I reboot very infrequently.
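
    The disable/re-enable cycle described in the edit can at least be scripted, so it's one double-click after a reboot rather than a trip through the Control Panel (a sketch; run it elevated, and the adapter name must match exactly):

        netsh interface set interface name="Local Area Connection" admin=disabled
        netsh interface set interface name="Local Area Connection" admin=enabled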

    Read the article

  • Data recovery on a data HDD (no OS)

    - by aCuria
    I am helping a family member with a dead hard disk. It is a Seagate 200 GB 3.5" HDD in one of those old-school external enclosures. The problem was that Windows failed to detect the hard disk when it was plugged in through USB. I removed the hard disk from its enclosure and plugged it into my desktop PC. The BIOS does detect it upon POST, but unfortunately Windows 7 then refuses to boot: it gets stuck on the loading screen with the glowing Windows logo, and safe mode doesn't help either. What options do I have before going for professional data recovery?

    Edit: Someone modified the title to something completely different from what I was asking, so I just changed it back. To summarize:

    1. There are two HDDs: disk A (dead) and disk B (my OS disk).
    2. When B is connected to my system, everything works fine.
    3. When A AND B are connected, the system fails to boot. It POSTs fine, but Windows won't load.
    4. A has NO OS; it's pure data. It came from an external HDD enclosure which doesn't belong to me, and I'm trying to do data recovery.
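
    Before handing the disk to a professional, a common first step is to image it with GNU ddrescue from a live Linux session with disk A attached (the device and paths below are placeholders - confirm them with lsblk, and always rescue to an image, never to the disk itself):

        sudo ddrescue -f -n /dev/sdX /mnt/space/diskA.img /mnt/space/diskA.map
        sudo ddrescue -f /dev/sdX /mnt/space/diskA.img /mnt/space/diskA.map   # second pass retries bad areas
        sudo losetup -fP --show /mnt/space/diskA.img    # expose the partitions inside the image
        sudo mount -o ro /dev/loop0p1 /mnt/recovered    # then copy files off read-only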

    Read the article

  • Configuring VLANs on a Cisco SG200-series switch with an Ubuntu server

    - by nixnotwin
    I created a VLAN on Ubuntu with the vconfig tool, with 21 as the ID and eth1 as the host port, and connected eth1 to one of the ports on the switch (GE23), as all ports trunk by default. In the web GUI I created a VLAN named "test" with the ID 21, and I made port GE2 an access port. In port-to-VLAN mapping I selected VLAN 21 and added port GE2 to it by selecting the untagged option. I have assigned 192.168.1.1/24 as the IP of eth1.21 on Ubuntu.

    If I connect another client PC to port GE2 with an IP of 192.168.1.2/24, I cannot ping the server IP (192.168.1.1/24). Ping from server to client does not work either. I inspected packets that are sent out of eth1 on the server and I could see the VLAN 21 tag. Then I connected the other end of the cable to a different Linux PC and inspected the packets, but no VLAN tags can be seen. What could be preventing me from getting VLANs working?

    Edit 1 screenshots:
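
    For reference, here is the iproute2 equivalent of the vconfig setup, which can help rule out a stale interface definition (a sketch; remove the old eth1.21 first):

        sudo ip link add link eth1 name eth1.21 type vlan id 21
        sudo ip addr add 192.168.1.1/24 dev eth1.21
        sudo ip link set eth1.21 up
        sudo tcpdump -nei eth1 vlan      # confirm tagged frames actually leave eth1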

    Read the article

  • scp: No such file or directory

    - by Joe
    I have a confusing question for which Super User doesn't seem to have a good answer, and neither does Google. I'm trying to scp a file from a remote server to my local machine. The command is this:

        scp user@server:/path/to/source/file.gz /path/to/destination

    The error I get is:

        scp: /path/to/source/file.gz: No such file or directory

    user is my username on the server. The command syntax appears fine to me. ssh works fine, and I can cd to the file, so it doesn't seem to be an access-control issue. Thanks!

    Edit: Thank you, John. I spotted the issue. ls returned this:

        -r--r--r-- 1 nobody users 168967171 Mar 10  2009 /path/to/source/file.gz

    So the file was on a read-only file system, and user is able to read it but not scp it. I just copied the file to a different directory, chowned it, and it worked fine. It would be good if someone could explain why this is the case, though.

    Read the article

  • How can I debug user-mode driver failures in Windows 8?

    - by Tom
    I have a 32 GB SD card. Whenever I insert this card into my newly upgraded Windows 8 laptop, the OS stops responding normally: Metro apps won't work, the system may or may not log in, and desktop apps may or may not be able to do things. When I remove the card and restart, all is fine; as soon as I put the card back in, the system starts misbehaving again. I've run Windows Update, so I have the latest drivers from Microsoft. This does not occur with the 8 GB cards I have. Unfortunately I only have one 32 GB card, so I can't test with others.

    From examining the system event log, I've determined this is happening due to a user-mode driver failure. How can I best debug this issue from here? How can I figure out which driver this is related to? Will there be a Dr. Watson crash dump somewhere?

    Details:

        - System
          - Provider
            [ Name]  Microsoft-Windows-DriverFrameworks-UserMode
            [ Guid]  {2E35AAEB-857F-4BEB-A418-2E6C0E54D988}
          EventID 10110
          Version 1
          Level 1
          Task 64
          Opcode 0
          Keywords 0x2000000000000000
          - TimeCreated
            [ SystemTime]  2012-10-29T00:51:57.532718300Z
          EventRecordID 40417
          Correlation
          - Execution
            [ ProcessID]  1056
            [ ThreadID]  3796
          Channel System
          Computer thebrain
          - Security
            [ UserID]  S-1-5-18
        - UserData
          - UMDFHostProblem
            [ lifetime]  {811E3DC4-FBC6-420B-ABCC-AD7505A36F3B}
          - Problem
            [ code]  3
            [ detectedBy]  2
          ExitCode 3
          - Operation
            [ code]  259
          Message 72448
          Status 4294967295

    Edit 1: So I tried using DebugView from Sysinternals (you can get it here: http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx). That gave me some output, but nothing especially helpful. Then I tried connecting WinDbg to WUDFHost.exe (the process that seems to host user-mode drivers) to see if it could catch the error. Get it here: http://msdn.microsoft.com/en-US/windows/hardware/hh852363 Instructions: http://msdn.microsoft.com/en-US/library/windows/hardware/ff554716(v=vs.85).aspx That didn't help much: it didn't catch any exceptions as I'd hoped (which would at least have pointed me to the cause of the crash). Here's the stack of one of the threads:

    Read the article
