Search Results

Search found 9715 results on 389 pages for 'bad passwords'.


  • IP Masquerade and forwarding

    - by poelinca
    Hi all, I have a dedicated server running Ubuntu Server 10.10 with 3 IP addresses on the same eth card (example: eth0 192.168.0.1, eth0:0 188.78.45.0, eth0:1 ...) and 3 virtual machines running (the virtualization technology used is lxc, but I don't think that matters much). Now I need to redirect all opened ports (using ufw to close/open ports) from the IP 188.78.45.0 (eth0:0) to a virtual machine IP (let's say, for example, 192.168.2.3), and all requests made by a virtual machine should be redirected back to the virtual machine that made the request (in this example 192.168.2.3). Let's say the second VM has the IP 192.168.2.4; now I need to redirect all opened ports from eth0:1 to this IP, and vice versa. And so on and so on. What are the iptables/ufw rules to get this done, and where do I save them (which config file) so they stay the same after a reboot? In a few words: redirect all requests coming from/to eth0:0 to a certain IP, all requests coming from/to eth0:1 to another IP, and so on. Remember, I'm saying all opened ports because they might be changed dynamically. P.S. Please excuse my bad English.
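
    A minimal sketch of the NAT rules this usually takes, reusing the addresses from the question (188.78.45.0 and 192.168.2.3); persisting via iptables-save/iptables-restore is one common approach, not the only one:

        # forward everything arriving on the first alias IP to the first VM
        iptables -t nat -A PREROUTING -d 188.78.45.0 -j DNAT --to-destination 192.168.2.3
        # rewrite the VM's outbound traffic so it leaves from the same alias IP
        iptables -t nat -A POSTROUTING -s 192.168.2.3 -j SNAT --to-source 188.78.45.0
        # repeat the pair for eth0:1 -> 192.168.2.4, and make sure forwarding is on
        sysctl -w net.ipv4.ip_forward=1
        # persist: dump the rules and reload them at boot (e.g. from /etc/rc.local)
        iptables-save > /etc/iptables.rules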

    Read the article

  • Tuning up a MySQL server

    - by NinjaCat
    I inherited a MySQL server, and so I've started by running the MySQLTuner.pl script. I am not a MySQL expert but I can see that there is definitely a mess here. I'm not looking to go after every single thing that needs fixing and tuning, but I do want to grab the major, low-hanging fruit. Total memory on the system is 512MB. Yes, I know it's low, but it's what we have for the time being. Here's what the script had to say:

    General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        MySQL started within last 24 hours - recommendations may be inaccurate
        Enable the slow query log to troubleshoot bad queries
        When making adjustments, make tmp_table_size/max_heap_table_size equal
        Reduce your SELECT DISTINCT queries without LIMIT clauses
        Increase table_cache gradually to avoid file descriptor limits
        Your applications are not closing MySQL connections properly

    Variables to adjust:
        query_cache_limit (> 1M, or use smaller result sets)
        tmp_table_size (> 16M)
        max_heap_table_size (> 16M)
        table_cache (> 64)
        innodb_buffer_pool_size (>= 326M)

    For the variables that it recommends I adjust, I don't even see most of them in the mysql.cnf file:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        innodb_buffer_pool_size = 220M
        innodb_flush_log_at_trx_commit = 2
        innodb_file_per_table = 1
        innodb_thread_concurrency = 32
        skip-locking
        big-tables
        max_connections = 50
        innodb_lock_wait_timeout = 600
        slave_transaction_retries = 10
        innodb_table_locks = 0
        innodb_additional_mem_pool_size = 20M
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = localhost
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 4
        myisam-recover = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        log_error = /var/log/mysql/error.log
        expire_logs_days = 10
        max_binlog_size = 100M
        skip-locking
        innodb_file_per_table = 1
        big-tables

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer = 16M

        !includedir /etc/mysql/conf.d/
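
    A sketch of what the low-hanging fruit might look like in that file on a 512MB box; the values below are guesses to validate against the real workload, and the script's innodb_buffer_pool_size suggestion (>= 326M) is deliberately not followed because it ignores how little RAM is present:

        [mysqld]
        # keep these two equal, as the script recommends
        tmp_table_size          = 32M
        max_heap_table_size     = 32M
        # raise gradually while watching the open-files limit
        table_cache             = 128
        query_cache_limit       = 2M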

    Read the article

  • Advice for UPS/Surge Protector in home office

    - by Fred
    I'm just starting out as an independent developer, mostly Unix stuff with some Windows thrown in occasionally. I've been running two machines, a Linux and a Windows dev machine. Long story short, we had a bad storm come through last week and I unplugged one machine but forgot to unplug the other, and the PSU and mobo ended up dead. Luckily I back up to an external service religiously (rsync.net, for anyone interested), so there was no loss of data, but it did show me a glaring hole in my current setup, namely the lack of a UPS and surge protection (this has honestly never been an issue before). Can anyone recommend a UPS/surge protector for a home office? It only needs to support a single machine (I opted to use VMware instead of rebuilding the other machine), but it's a quad-core Phenom II with a 1kW PSU. This is outside my experience so I thought I'd get some input from others. I'm looking for something that's reasonably priced and does the job reasonably well. I don't need absolute 100% uptime, just something to protect my PC better than it is now.
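
    As a rough, hedged sizing note: a PSU's rating is its ceiling, not its draw, and a single quad-core desktop typically pulls a few hundred watts at the wall. UPS capacity is quoted in volt-amperes, so at a power factor of roughly 0.9, a 400W load works out to about 450VA, and a 600-900VA line-interactive unit would leave headroom for a monitor. Measuring the real draw with a plug-in power meter before buying is the safer path.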

    Read the article

  • Revamping an old and unstable office IT-solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or experiences problems with Flash games. To say the least, this isn't working for them. Now, I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the domain controller for the future AD also running the VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy, though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself when it comes to accessing file shares on the network via VPN. I don't know how to let users logging in via the VPN access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but not every computer will necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear. Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules in their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.
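
    For the Debian guest, a minimal routed server.conf sketch; the subnet choices and the assumption that the office LAN is 192.168.0.0/24 are illustrative, not taken from the question:

        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        # push a route to the office LAN so VPN clients can reach the file server
        push "route 192.168.0.0 255.255.255.0"
        keepalive 10 120
        persist-key
        persist-tun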

    Read the article

  • How can I create two partitions and clone one to the other (using Clonezilla)?

    - by johnny
    I was hoping someone could help. I want to create a "backup" partition. I want to create two partitions on my drive. One is a good install, and I want to use Clonezilla to copy the good partition to the broken/unused partition and have the restored partition boot up as usual. For example: C: goes bad. D: is a "good" copy of C. C gets a corrupt registry. I restore D to C, and C will then boot up as usual. So, I need to do the clone with Clonezilla and the restore with the same. I see the part_...clone and restore options. Will this do it? How do I create the partitions? EDIT: I am using XP. How can I do this? Also, I know this is not the best approach for all occasions. I have an offline backup as well; I would like to have both. Thanks for any help. I'm using Clonezilla, if it matters.
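
    Clonezilla's part-to-part clone does this from its menus; conceptually it is just a block copy between two partitions of sufficient size, which a live CD can also do by hand. A rough sketch, where /dev/sda1 (good) and /dev/sda2 (spare) are hypothetical device names and the target's contents are destroyed:

        # clone the known-good partition over the spare one
        dd if=/dev/sda1 of=/dev/sda2 bs=4M conv=noerror,sync
        # the restore direction later is the same command with if= and of= swapped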

    Read the article

  • Clone a Windows Installation to a 3TB Hard Drive; MBR to GPT

    - by DanBlakemore
    I have Windows 7 Professional 64-bit installed on my desktop. Unfortunately for me and my wallet, my hard drive is failing. I have purchased a 3TB hard drive as a replacement for my current 2TB drive. I would like to avoid as much hassle as possible in moving to this new drive, so I would like to copy my current partition to the new drive using GParted. The problem is that I suspect my current partition table is MBR, and I need GPT on my new drive since it is 3TB. Can I simply copy the MBR partition onto the new disk and then convert it to GPT after the fact (can you even convert the type of a partition)? Or would I need to somehow copy the contents of the partition into a GPT partition on the new drive? How do I go about making this transition? Also, are there any issues I should be wary of when booting from a GPT partition? If it matters, my motherboard is 1 year old as of May 2012. Edit: My motherboard is 1 day old. My old one did not have UEFI compatibility, so I decided to make an upgrade to Intel today, given that I would need a UEFI motherboard to use my new HDD. How much can I use a dying hard drive (bad sectors according to Hitachi Drive Fitness Test)? I have assumed not at all, to be safe.
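
    One possible route once the data is copied over: gdisk can convert an MBR disk to GPT in place, though booting Windows from GPT then requires UEFI firmware (which the new board has). /dev/sdb is a hypothetical name for the new 3TB drive:

        gdisk /dev/sdb
        # gdisk notices the MBR table and offers an in-place GPT conversion;
        # review the result with 'p', then commit the new GPT with 'w'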

    Read the article

  • Remote offscreen rendering

    - by redmoskito
    My research lab recently added a server that has a beefy NVIDIA graphics card, which we would like to use to do scientific computations. Since it isn't a workstation, we'll have to run our jobs remotely, over an SSH connection. Most of our applications require doing OpenGL rendering to an offscreen buffer, then doing image analysis on the result in CUDA. My initial investigation suggests that X11 forwarding is a bad idea, because OpenGL rendering would occur on the client machine (or rather the X11 server--what a confusing naming convention!) and would suffer network bottlenecks when sending our massive textures. We will never need to display the output, so it seems like X11 forwarding shouldn't be necessary, but OpenGL needs $DISPLAY to be set to something valid or our applications won't run. I'm sure render farms exist that do this, but how is it accomplished? I think this is probably a simple X11 configuration issue, but I'm too unfamiliar with it to know where to start. We're running Ubuntu Server 10.04, with no gdm, gnome, etc. installed. However, the xserver-xorg package is installed.
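
    A sketch of the headless-X approach render nodes tend to use, under the assumption that the NVIDIA proprietary driver is installed: generate an xorg.conf that lets the driver start with no monitor attached, run a bare X server on the console, and point $DISPLAY at it. render_job is a hypothetical stand-in for the actual application:

        nvidia-xconfig -a --use-display-device=None   # write a headless xorg.conf
        X :0 &                                        # bare X server, no desktop needed
        export DISPLAY=:0
        ./render_job                                  # hypothetical application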

    Read the article

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated e-commerce management system on my hosting service. For that product, no support is provided anymore by the vendor. Brief explanation: the shop website, which claims to run on a LAMP stack, is built by an old Visual Basic Windows application running on MS Access. The user constructs the shop, defines the HTML template, adds products and categories, etc. Then the VB exe builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own.

    The problem: browsing the website, many products' descriptions are cut off before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price".

    The analysis:
    MySQL contains a description cut off at the € sign, so it's not PHP's fault.
    The Access databases contain the full description with the € sign, so it's not the fault of the webmaster writing a bad description, or of eDisplay cutting them.
    The SQL that will run once the site gets uploaded, stored on my local machine before upload, contains the € sign.
    The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign messed up like this: ^À
    The vsftpd log reports (obfuscated for privacy): Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c, which seems to be a binary transfer (and also a huge security vulnerability, because you can download the whole database over unauthenticated HTTP).
    The eDisplay internal FTP client provides no option for ASCII/binary transfer modes.
    [Add] Trying to manually upload the SQL file via SFTP messes up the euro sign too.
    [Add2] Trying to manually upload using the Xftp client with explicit ASCII mode doesn't fix it either.

    It looks like the file gets uploaded as binary. Perhaps on the customer's previous host it all worked fine because that was a Windows host.

    The server: an Azure virtual machine running openSUSE 12.2 with both vsftpd and OpenSSH.

    The question: without asking the customer to manually upload files using FileZilla, or to replace € with &euro; (he refuses), what can I do on the server side to prevent vsftpd from screwing up the euro sign?
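
    Two server-side angles, on the guess that this is as much a charset mismatch (a Windows-1252 file landing on a UTF-8 LAMP stack) as a transfer-mode problem; db.sql is the filename from the log:

        # inspect and transcode the uploaded file after each upload
        file db.sql
        iconv -f WINDOWS-1252 -t UTF-8 db.sql > db-utf8.sql
        # and/or let vsftpd honour clients that request ASCII mode, in /etc/vsftpd.conf:
        #   ascii_upload_enable=YES
        #   ascii_download_enable=YES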

    Read the article

  • Is there a man-in-the-middle attack on my server machine?

    - by GongT
    My server has worked well for about half a year, but a strange thing happened a few hours ago. This server has two IP addresses: 58.17.85.19 and 117.21.178.19. When I navigate to http://58.17.85.19, nothing is different from before. But http://117.21.178.19 returns a "302 Object moved" and becomes a redirect loop. I did some tests ($cmd = "wget http://117.21.178.19/?xx=$RANDOM --max-redirect 0 -S --no-cache -O -"), step by step:

    Run $cmd on my PC and on my friend's (we live on two sides of China, far apart): got 302.
    Run $cmd on this server: got 200 OK (the content is the correct output of index.php).
    Run $cmd on another server in the same computer room: got 200 OK.
    Telnet from my PC and build an HTTP request (typed by hand): got 200 OK.
    Shut down php-fpm, run $cmd on my PC: got 302.
    Run $cmd on the server: 502 Bad Gateway.
    Shut down nginx, run $cmd on both the server and my PC: connection refused.
    Create an iptables rule refusing any connection to 58.17.85.19:80, run nc -l 80 -k -vvv on the server and run $cmd on my PC. NC shows that the server accepts the connection ("Connection from [my ip]"), then my connection is closed ("Remove fd xx from list"), and wget dumps out a response: got 302.

    I know that normally NC will accept the connection, then dump the HTTP request from the client, and the client will wait for a response. That connection should stay open forever (in fact the client will close the connection because of a timeout), because NC can't give any response. So... where did my request go? Who sent a response to the client? Some virus on my server system? If so, why doesn't 58.17.85.19 have this error? Or... am I being attacked by a middleman?
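
    One way to see who is actually answering: capture on the server's interface while running $cmd from outside, and check whether the 302 ever appears in the capture (eth0 is a hypothetical interface name):

        tcpdump -i eth0 -nn -A 'host 117.21.178.19 and tcp port 80'
        # if the client receives a 302 that never shows up in this capture, the
        # response is being injected upstream of the server, not generated on it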

    Read the article

  • Windows Vista freezes

    - by Kakurady
    Windows Vista (32-bit) randomly freezes on my computer, usually 15-30 minutes after login, but it can happen just after login. All applications stop responding and the hard drive makes no sound, and after a while the mouse cursor also stops moving. I dual-boot Ubuntu, and that still works fine. It started with the computer freezing when loading Team Fortress 2. Alt-Tab and Ctrl-Alt-Del have no effect, and the hard drive does not make any sound. I tried to verify the game data using Steam and that froze the computer too. So I stupidly reinstalled the game. Now the game doesn't freeze when it starts, but instead the whole computer randomly freezes. This computer is a Dell XPS M1530 with a 320GB (298GiB) drive (WDC WD3200BEVT-7) split 5 ways, with Windows and Linux on a partition each, one more for Linux swap space, and another two partitions for the Dell diagnostic program and the factory image and drivers. There was one day when the hard drive made clicking noises all day, and it only stopped when I rebooted the computer. Since then, the BIOS diagnostics fail the drive (for "self-test log contains previous errors") whenever run. (The on-disk diagnostics cannot be run because I overwrote the MBR with GRUB.) Naturally, I thought the hard drive could be the problem. CHKDSK found one bad sector, but this seems to have no effect. System File Checker found two protected files with wrong hashes; one is some kind of IE manifest, and the other is a tcpmon.ini. Neither of them can be restored because their backup copies also have wrong hashes. Nothing about system failures in the Event Viewer. What should I do next?
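
    Given the failed BIOS self-test and the clicking, a reasonable next step from the working Ubuntu side is to read the drive's own SMART log (smartmontools; /dev/sda is a hypothetical device name):

        sudo apt-get install smartmontools
        sudo smartctl -a /dev/sda       # check Reallocated_Sector_Ct and the error log
        sudo smartctl -t long /dev/sda  # queue a full surface self-test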

    Read the article

  • Local user vs. domain user? What is the right way here?

    - by ebeeb
    I'm a software developer in a company with 6 employees. Everyone has a machine to him-/herself, so none of the machines is shared. I'm currently setting up my machine with Windows 8 and was experimenting a bit with domain and local user accounts. Correct me if I'm wrong, but I think the idea is that domain users generally should not be able to modify the configuration of a machine (like installing software), since they are able to log in on every single machine in the domain. The local user (usually just one local administrator per machine) is the one who takes care of the configuration of the machine. But in my case the login into the domain is just for being able to access directories/servers in the domain (I do not really know the details; all I know is that logging into the domain user account is necessary). So overall I've got a local admin account and a domain account used on my machine. While working I'm logged in to my domain user account. But it annoys me that I always need to enter the credentials of my local user account when I'm about to update/install something, which happens quite often as a software developer. I fixed this by adding the domain account to the user accounts in my Control Panel and putting it into the Administrators group. The only thing I want to know about this: is there something REALLY bad about doing it this way? Or is there maybe a more common way to be able to act like a local admin while logged in as a domain user? PS: I'm sorry about the tags, but I don't know the proper ones. I'd be glad if some of the superuser experts could fix this :-)
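
    For reference, the Control Panel change amounts to this one command from an elevated prompt (DOMAIN\ebeeb is a hypothetical account name):

        net localgroup Administrators DOMAIN\ebeeb /add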

    Read the article

  • Heavy writes to Galera cluster - table locked, cluster practically unusable

    - by Joe
    I set up a Galera Cluster on 3 nodes. It works perfectly for reading data. I wrote a simple application to run some tests on the cluster. Unfortunately I have to say that the cluster fails totally when I try to do some writing. Maybe it can be configured differently, or maybe I'm doing something wrong? I have a simple stored procedure:

        CREATE PROCEDURE testproc(IN p_idWorker INTEGER)
        BEGIN
          DECLARE t_id INT DEFAULT -1;
          DECLARE t_counter INT;
          UPDATE test SET idWorker = p_idWorker WHERE counter = 0 AND idWorker IS NULL LIMIT 1;
          SELECT id FROM test WHERE idWorker = p_idWorker LIMIT 1 INTO t_id;
          SELECT ABS(MAX(counter)/MIN(counter)) FROM test INTO t_counter;
          SELECT COUNT(*) FROM test WHERE counter = 0 INTO t_counter;
          IF t_id >= 0 THEN
            UPDATE test SET counter = counter + 1 WHERE id = t_id;
            UPDATE test SET idWorker = NULL WHERE id = t_id;
            SELECT t_counter AS res;
          ELSE
            SELECT 'end' AS res;
          END IF;
        END $$

    Now my simple C# application creates, for example, 3 MySQL clients in separate threads, and each one executes the procedure every 100ms until there is no record where column 'counter' = 0. Unfortunately, after about 10 seconds something goes bad. On the servers there is a 'query_end' process that never ends. After that, you cannot run an update on the test table; MySQL returns: ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction. You can't even restart MySQL. What you can do is restart the server, sometimes the whole cluster. Is Galera Cluster really so unreliable when you do massive concurrent writing/updates? Hard to believe.
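
    One possible angle: Galera resolves multi-node write conflicts with deadlock-style aborts, so a procedure hammering the same hot rows from three clients needs retries. The counters below are real wsrep status variables; the retry value is a guess to tune:

        -- how many local transactions are being aborted by cluster certification
        SHOW GLOBAL STATUS LIKE 'wsrep_local_%';
        -- let each node transparently retry conflicting autocommit statements
        SET GLOBAL wsrep_retry_autocommit = 4;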

    Read the article

  • What can be done to improve time synchronization on networks with sporadic internet access?

    - by anregen
    I'm looking for advice setting up time servers for a very non-typical network. I support many closed networks that have occasional access to the internet. A network would get access most days for a few hours, but would frequently go 1-3 weeks blacked-out. The computers/servers on this network are mostly *nix-based, but not all the same flavor. The entire network is mobile, so when it connects, it will have very different hops/latency to internet time servers. The servers on the closed network are powered-off frequently (at least daily). Right now, my gut tells me to use NTP (because I hate re-learning all the stuff that someone else already got working pretty well). But I have several issues, and am looking for someone with experience in this type of strange situation. I currently have no solution in place, I'm simply letting the internal clocks drift. This results in errors of ~600s in a majority of networks. I have seen mismatch worse than 10,000s. Is there something "better" than NTP in this situation? I know NTP likes to have very frequent, consistent access to servers that give nearly identical answers. I won't have that. How many internal NTP servers should I configure, so that during periods of internet blackout, I have internal time that is consistent within the closed network? There is no human access. No matter how large the mismatch, the server(s) must attempt to correct itself. Discrete steps are very bad. No matter how large the mismatch, the correction must be "slewed", not "stepped". I understand that this could take many hours to correct.
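
    A sketch of what ntp.conf on the internal masters could look like for this kind of network: take internet time when the uplink is up, serve local time at a deliberately poor stratum when isolated, and never step the clock (the pool hostnames are examples):

        driftfile /var/lib/ntp/ntp.drift
        server 0.pool.ntp.org iburst    # reachable only while the uplink is up
        server 1.pool.ntp.org iburst
        tos orphan 8                    # keep serving the LAN at stratum 8 when cut off
        tinker panic 0 step 0           # never exit or step on a large offset; always slew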

    Read the article

  • Did my hard drive fail or is it something else?

    - by Julian
    Last night while I was watching a movie on my laptop, the external monitor just went blank and the built-in display froze. Weird, I thought, so I restarted it, only to be greeted with this heart-breaking message: "No Operating System Found". After a few panicked restarts I accepted the fact that my hard drive might be done :(. Being the resourceful techie that I am, I whipped out Ubuntu Live on my old flash drive and was up and running before daybreak. I cannot access the hard drive through Ubuntu (which I expected), but I also cannot access my DVD drive! This got me thinking that it might not be the hard drive but some other component that both the HDD and the DVD use. Hopefully this is the case. Which component is the most likely culprit? What tools can I use from Ubuntu Live on my USB flash drive to find out? I'm in a bad place without my HDD; thanks in advance for any assistance! P.S. My laptop makes a weird noise when I try to access or eject my DVD within the slot. Also my HDD makes a weird noise sometimes. Not sure how to describe it. System specs: Dell 1558.
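
    Since the HDD and the DVD drive usually hang off the same SATA controller, a sensible first check from the live session is the kernel's view of the bus, before blaming either device (device names are hypothetical):

        dmesg | grep -i -E 'ata|error'   # look for link resets / I/O errors per port
        sudo smartctl -a /dev/sda        # if the disk answers at all, read its SMART log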

    Read the article

  • Computer won't start up. Stuck on Lenovo splash screen. Help Diagnose

    - by Ace Legend
    I have a Lenovo 21" IdeaCentre (I'm not sure exactly what model). Honestly, the computer works off and on. I have had problems with it not being able to shut down, which I fixed. The fan seems to be constantly running, and there are a few other problems as well. Anyway, nobody was using it when all of a sudden it switched to a blue screen. I was in the kitchen, but when I got over to the computer I read the message. It said something about bad drivers, but that is all I saw, and then it restarted. However, when it got to the Lenovo splash screen, nothing happened. I waited there for over 10 minutes, but still nothing. I tried to turn off the computer, but the only way to do it was to pull out the power cable. I then removed all USB devices and tried again. Still nothing. It also won't respond to keyboard input when I try to use Enter to interrupt normal startup. My guess is some piece of hardware is damaged inside the machine. However, I have no idea which piece it is. Does anybody have any idea what could be wrong with it? Thanks.

    Read the article

  • virtual host settings fail on multiple sites

    - by Ricalsin
    Wow. I'm puzzled. On my Ubuntu system I've set up an apache2 server and configured three virtual hosts in the /etc/apache2/sites-available directory, using a2ensite to symlink them into sites-enabled. The first two work great; a simple URL of localhost.mysitenames.com works for both, each finding its DocumentRoot and Directory paths. The third always generates a Bad Request (Invalid Hostname) response, with nothing in the server error.log, as the request never seems to hit it. I've copied/pasted the working vhost files and made the minor changes to ServerName, DocumentRoot and Directory, and the same problem persists. I always run "sudo /etc/init.d/apache2 restart" whenever I make a change. I've cleared the browser cache as well. No love. There's not a limit to the number of sites you can host, right? My goal is a localhost development environment with the expectation that I can run any number of websites locally before pushing them to a live server. Any thoughts on how to debug this? Or a simple solution I am missing?
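
    A reasonable first check is to ask Apache how it actually parsed the three vhosts, then diff the failing one against a known-good skeleton (the mysite3 names and paths are hypothetical):

        apache2ctl -S   # dumps the parsed vhost table: ports, ServerNames, config files

        <VirtualHost *:80>
            ServerName localhost.mysite3.com
            DocumentRoot /var/www/mysite3
            <Directory /var/www/mysite3>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    For what it's worth, "Bad Request (Invalid Hostname)" is the wording IIS uses for this error, so it may also be worth confirming which server is actually answering that hostname.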

    Read the article

  • mod_rewrite changes case even if not matching RewriteCond?

    - by kirdie
    I have a really strange problem with my MediaWiki, which I want to serve articles of the form mywiki.org/MyArticle. I got most of it to work using the following code, but mysteriously it cannot display the logo anymore.

        RewriteEngine On
        # don't rewrite valid requests to files and directories
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        # mywiki.org/MyArticle gets rewritten to mywiki.org/index.php/MyArticle
        RewriteRule ^/(.*)$ /index.php/$1 [L,QSA]

    Now when I type mywiki.org/img/logo.jpg into my browser, the address changes to http://wiki.geoknow.eu/Img/logo.jpg (capital I) and I get the empty article page, but the image is definitely there (in my document root under the img folder):

        /var/www/mywiki.org$ ls img
        logo.jpg

    So far so bad. But now it gets really crazy: when I add

        RewriteCond %{REQUEST_URI} !^/.*\.jpg

    my address still gets rewritten, and my access log says:

        - - [05/Dec/2012:16:30:21 +0100] "GET /Img/geoknow_logo.jpg HTTP/1.1" 404 509 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Firefox/17.0"

    Where does that capital I in Img come from? The rule shouldn't even be executed, because at least one condition is definitely not met now, and I also don't have any to-lowercase transformation defined anywhere. What is happening, and how can I repair this? P.S.: Now all of a sudden the problem has gone away (the image is displayed as it should be and there is no capital replacement anymore). What can cause this, and why does it spontaneously appear and disappear?
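
    One plausible reading: the capitalisation is MediaWiki's, not mod_rewrite's. Once /img/logo.jpg falls through to index.php, the wiki treats "img/logo.jpg" as a page title and uppercases the first letter (its default behaviour), and that redirect then re-enters the rules. Testing the file checks against %{REQUEST_FILENAME} is the usual shape:

        RewriteEngine On
        # serve real files and directories untouched
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^/?(.*)$ /index.php/$1 [L,QSA]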

    Read the article

  • How to resolve 'No internet connectivity' issues with a virtualised 2008 R2 server using Forefront UAG

    - by user684589
    I have spent considerable time reading as many blogs and articles as I can to help me work out why my VM (running on Hyper-V) for DirectAccess has suddenly stopped being able to access the internet. The VM setup shares the same internet connection over which I have written and submitted this question, so I know that the underlying internet connection is fully functional. Up until last week, DirectAccess was fully functional and had no issues. This is a recent problem, preceded by a number of consistent crashes on the DA machine when access was attempted. Upon reboot all seemed well, until recently. I am not certain whether it is relevant, but previously I had a number of power issues where the entire VM host shut down unexpectedly, leaving around 8 VMs in a bad way. Upon restart, the UAG DirectAccess machine was unable to access its configuration service (although the service was started), but this seemed to relate to the Active Directory Lightweight Directory Services (AD LDS) instance, which had a corrupted database. Having repaired this database, I restarted the service and could subsequently reconnect to the configuration service. For good measure I re-bound the network adapters (virtualised through Hyper-V) and DirectAccess claimed to be all happy again. However, as it stands, my machine is still unable to access the internet, showing the "No internet connectivity" exclamation mark for the external-facing NIC. I have also tried removing the adapters and disabling and re-enabling them, and the problem persists. The intranet part of the VM (CorpNet) seems to be fully functional as before, and I'm running out of ideas. Any input would be greatly appreciated. I am not an advanced domain administrator, so please be gentle.

    Read the article

  • USB 3.0 ports backwards compatibility problems with 2.0 devices?

    - by AaronLS
    I've seen some info on the net suggesting that I should be able to get my USB 2.0 devices to work in 3.0 ports. I only have two 2.0 ports on my new computer, and six 3.0 ports. I have installed the drivers. There are two different drivers; I guess some of the ports are supported by the Intel chipset and some by another chipset on the motherboard. However, I have yet to get any of the 3.0 ports to work, and my brother has had the same issue with his devices not working in the 3.0 ports on his computer. So I am beginning to wonder if the backwards compatibility isn't reliable for some reason; maybe manufacturers opt not to implement 2.0 support on the 3.0 ports. I understand that physically the wiring is there, but that is only half the story. Beyond my brother's and my own computers (different motherboards/everything), I have yet to see a 2.0 device work in a 3.0 port. Is there any reason for this apparent device incompatibility? I.e., I'm looking for responses that indicate what areas to explore for issues, or whether there are known cases of manufacturers deviating from spec in hardware or drivers. I am aware it's "supposed" to work :) Update: Does this have any relation to the "USB Legacy Support" options in the BIOS? There are several combinations of options with "USB Legacy Support" and "USB 3.0 Legacy Support", and the description for these is a bit confusing; it sounds like a bad translation.

    Read the article

  • How do you update without cutting off users?

    - by Griffin
    I searched around and I was surprised that I couldn't find an answer to this question. My assumption is that you have multiple servers; normally they will both be doing their specific task (for the rest of this I will assume a simple website). Now let's say servers A and B need updates. Do you update server A while server B keeps serving the webpage, and then when server A is okay you update server B? This seems like it would work at small scale, but it seems horrible at large scale, because you'd need twice the capacity you normally have. When dealing with a large number of servers, do you update small sections at a time? I thought the problem with this would be: if server A can't work alongside servers B, C, D, E or F any longer, that's not that bad, but as you keep updating you slowly lose that small percentage of capacity. What is the proper way to deal with updates like this?
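
    The usual answer is a rolling update behind a load balancer: drain one server, update it, health-check it, put it back, then move on. A bash sketch where lb_disable, lb_enable and health_check are hypothetical helpers (every real balancer spells these differently):

        #!/usr/bin/env bash
        set -e
        for host in web-a web-b web-c; do       # hypothetical host names
            lb_disable "$host"                  # stop sending it new traffic
            sleep 30                            # let in-flight requests drain
            ssh "$host" 'sudo apt-get update && sudo apt-get -y upgrade'  # or your deploy step
            health_check "$host"                # hypothetical: abort if unhealthy
            lb_enable "$host"                   # back into rotation
        done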

    Read the article

  • How to set up bindings for a development IIS 7.5 with lots of sites

    - by Antonio Bakula
    I am a programmer in a small ASP.NET shop with very little experience in server administration, and I have to set up IIS 7.5 to host lots of sites on a newly installed Windows Server 2008 R2. These sites are test "clones" of sites on the "real" web server, and they should be accessible only on the local network (domain). Developers should be able to add new sites for our new customers. Project managers use this server to check progress and test new sites and new features, and QA people need access to these sites to test them before we copy them to the "real" web server. Developers have access to the IIS console; in fact they can use RDP to the test server with their developer domain credentials and permissions, and they are also local admins on that machine (tester). On our previous server I used different port numbers for each site. That worked, but I don't like this solution; I would prefer to use subdomains. But here are the problems: manually adding DNS records is not an option, because we do not want developers to have to administer the domain DNS server, and currently this has to be done with domain administrator credentials. Is there some way to add DNS records automatically? I tried to add a DNS record for subdomains on the test server with a wildcard (*.tester), and that seemed to work for a while, but the change caused some bad problems in our domain network, and the admin forbade me to mess with DNS. He said that I have to add a DNS record for every subdomain manually and that I cannot use wildcards, and there is nothing I can do about it, mainly for "political" reasons :( Obviously our admin is pretty uncooperative, outsourced from a different organization, and I can't do anything about that. Can I add another DNS server on that machine? What must be set up on client machines to "tell" them to use both the domain DNS server and the tester domain server? So please, I need someone to give me some advice. What should I do? Are different port numbers the only option left? Thanks!
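
    If the admin ever relents, record creation can be scripted so developers never touch the DNS console directly; a sketch with Microsoft's dnscmd, where the server, zone and host names are examples:

        dnscmd dc01 /RecordAdd corp.example.com newsite-test A 10.0.0.25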

    Read the article

  • How would I write a terminal command to download a folder with wget from a Media Temple (gs) server?

    - by racl101
    I'm trying to download a folder using wget in the Terminal (I'm using a Mac, if that matters) because my FTP client sucks and keeps timing out; it doesn't stay connected for long. So I was wondering if I could use wget to connect via the FTP protocol to the server to download the directory in question. I have searched around the internet for this and have attempted to write the command, but it keeps failing. So, assuming the following: the ftp username is [email protected], the ftp host is ftp.s12345.gridserver.com, and the ftp password is somepassword, I have tried to write the command in the following ways:

        wget -r ftp://[email protected]:[email protected]/path/to/desired/folder/
        wget -r ftp://serveradmin:[email protected]/path/to/desired/folder/

    When I try the first way I get this error: Bad port number. When I try the second way I get a little further, but then I get this error:

        Resolving s12345.gridserver.com... 71.46.226.79
        Connecting to s12345.gridserver.com|71.46.226.79|:21... connected.
        Logging in as serveradmin ... Login incorrect.

    What could I be doing wrong?
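
    The first failure is URL parsing: the extra '@' inside the username splits the URL early, hence "Bad port number". wget's explicit credential flags sidestep the quoting entirely (the values below are the placeholders from the question):

        wget -r --ftp-user='serveradmin@s12345.gridserver.com' \
             --ftp-password='somepassword' \
             ftp://ftp.s12345.gridserver.com/path/to/desired/folder/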

    Read the article

  • Server Intermittently Inaccessible Externally (but Accessible Internally Continuously)

    - by nicorellius
    I have a CRM on a server on a network. We have a static IP and another, outward-facing server. We use port forwarding to map to the CRM, so that when you go to the IP or the FQDN, you get to the CRM: xxx.xxx.xxx.xxx crm.example.com. Internally, we can access the CRM by going to crm or crm.example.com. Lately, I've been noticing that accessing the server from outside the network times out or gives 503/bad gateway errors. During those times, I can still SSH (a different port, so this works) into the outward-facing computer and access the server just fine. I have a robot monitoring the site over HTTP, and indeed it confirms the site is going down periodically. I looked through the Apache access and error logs and nothing stuck out at me, so I'm a bit confused as to what could be going on. I also searched the access logs for 503 and found nothing. When I run tracert from outside the network, it appears the packets basically make it through the wider-area servers (Comcast city and county servers) and end up dropping at the CRM server's front step. I'm tempted to replace the server because it is older and underpowered, but it would be nice to know what is going on. Any ideas what to do next?
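
    One way to catch an outage in the act from outside: poll the public URL once a minute, log the status code, and line the gaps up against the Apache and system logs afterwards (the URL is the placeholder from the question):

        while true; do
            code=$(curl -s -o /dev/null -m 20 -w '%{http_code}' http://crm.example.com/)
            echo "$(date +%FT%T) $code" >> crm-probe.log
            sleep 60
        done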

    Read the article

  • Lost Windows 7 boot after EasyBCD with EFI

    - by drent
    I've got a Lenovo Y580 with a 64GB SSD and a 1TB HDD, set up with GPT and booting via (U)EFI. I was trying to get my Linux Mint installation onto the Windows boot manager using EasyBCD (I didn't realise it doesn't handle EFI), but it wiped my boot partition/loader, and I cannot seem to get Windows back (and I still can't get a bootable Linux Mint). Using the System Recovery utility, Startup Repair can't "see" Windows (maybe because I'm using a 7 Pro disk to recover Home Premium?). In the command prompt, the bootrec tools don't do anything, and bootsect can't run because it says it's for BIOS only and I've booted with EFI. I can see the EFI data on the 200MB SSD partition using diskpart, but I don't know how to add Windows back onto whatever bootloader I have/need. At the moment the only options I can see are:

    Do a fresh install of Windows and hope that the setup remains as fast as the default one (the SSD is some kind of cache for Windows, but I can't quite see how it works, given that the rest of the SSD is unpartitioned space). This seems like overkill, given that Windows was working fine until EasyBCD deleted it.
    Try forcing BIOS mode and see if that somehow magically fixes things.
    Try converting from GPT to MBR to use the bootrec/bootsect tools (and maybe back again), which seems like a really bad idea.

    Does anyone have any ideas?
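
    A possible recovery sketch from the recovery disk's command prompt, rebuilding the EFI boot files with bcdboot; C: and S: are hypothetical letters (the Windows volume and a free letter for the EFI system partition), and older recovery environments may lack the /f switch, in which case plain "bcdboot C:\Windows /s S:" is the fallback:

        rem mount the EFI system partition on S:
        mountvol S: /S
        rem rewrite \EFI\Microsoft\Boot onto it
        bcdboot C:\Windows /s S: /f UEFI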

    Read the article

  • No partition on USB Flash Drive?

    - by Skytunnel
    A friend gave me a corrupted USB memory stick to try to recover data from, but I've had some unusual results, so I thought I'd share to see if anyone is familiar with this problem. First off, I just tried opening it from my own PC. Windows prompted me to format the drive, which I of course declined. I downloaded TestDisk to analyse the drive, and right away I noticed something strange: in the listed drives it comes up as

        Disk /dev/sdc - 6144 B - USB Flash Drive

    That's right, the first USB flash drive smaller than a floppy disk!? Moving on anyway... the first analysis comes up with: Partition sector doesn't have the endmark 0xAA55. TestDisk's Quick Search gave no results, so I moved on to Deeper Search: No partition found or selected for recovery. This left me stumped. I tried a couple of other programs with no success. I did manage to get a backup image, but it was just as small as TestDisk indicated, so there was nothing of use on it. After a few hours trying various suggestions from other sources, I gave in and just tried formatting the drive, but that returned the message: Windows was unable to complete the format. From googling that, the suggestion was to delete the partition, but there is no partition to delete in this case. Most recently I've tried formatting from cmd, and got this result:

        Format D: /FS:FAT32
        The type of the file system is RAW
        The new file system is FAT32
        Verifying 0M
        11 bad sectors were encountered during the format. These sectors cannot be guaranteed to have been cleaned
        The volume is too small for FAT32

    Anyone got any suggestions? UPDATE: As per a suggestion from @Karen, I tried running CLEAN from DISKPART, with this result: DiskPart has encountered an error: The request could not be performed because of an I/O device error.
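
    When even diskpart's CLEAN dies with an I/O error, the stick's controller is refusing writes, which usually means the flash translation layer itself has failed; a last check from a Linux live session, where /dev/sdX stands for the stick (triple-check the letter first):

        dmesg | tail -20                            # USB/IO errors as the stick is plugged in
        dd if=/dev/zero of=/dev/sdX bs=1M count=16  # try rewriting the first 16MB...
        dd if=/dev/sdX bs=1M count=16 | hexdump -C | head   # ...and verify it reads back as zeros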

    Read the article
