Search Results

Search found 23783 results on 952 pages for 'edit and continue'.

Page 401/952 | < Previous Page | 397 398 399 400 401 402 403 404 405 406 407 408  | Next Page >

  • Forcing users to change password on first login - Windows Server 2008 R2 Remote Desktop Services

    - by George Durzi
    I'm setting up a demo lab environment in which each demo lab user is assigned 4 accounts to use in the lab. Users access the lab via Remote Desktop to the "client" machine in the lab - exposed at demolab.mydomain.com.

    - The client machine is a Windows Server 2008 R2 Enterprise Edition server
    - The Remote Desktop Services role is configured on this server
    - Remote Connection settings are configured to allow users to connect with any version of the Remote Desktop Client
    - All accounts are members of the local Administrators and Remote Desktop Users groups
    - All accounts are configured to be forced to change the default password after first login

    The user is instructed to remote into the lab with an account designated as their main account, and to establish 3 more remote desktop sessions within the lab using their 3 other assigned demo lab accounts. When establishing the initial remote desktop connection to the lab using their main account, the user sees the change password dialog as expected. However, after logging in and trying to establish remote desktop connections to the server with their three other accounts, they are prompted that they need to change the password after logging in, but can't continue with the login process - they don't see the expected change password experience. After logging in with the primary account, it doesn't make a difference whether I establish the Remote Desktop connection to the environment using the name of the server, e.g. Client, or demolab.mydomain.com. I experimented with changing the Remote Connections settings to require NLA, but that didn't make a difference. Appreciate any tips. Thanks
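
    If it helps to isolate whether the flag is still set on the secondary accounts, something like the following can be run on the lab server from an elevated prompt (a quick sketch; demolab2 is a placeholder account name, add /domain if the accounts are domain accounts):

        rem re-apply "must change password at next logon" for one of the secondary accounts
        net user demolab2 /logonpasswordchg:yes
        rem and confirm the account is not set so the password never expires
        net user demolab2 | findstr /i "expires"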

    Read the article

  • High latency due to non-presence of a transit provider in my country

    - by nixnotwin
    My ISP, a state-owned incumbent, buys bandwidth from different transit providers. Whenever it buys transit it announces only a specific prefix (in most cases a hitherto unused one) through the new transit AS. For example, if it runs out of bandwidth, it buys bandwidth from a new transit and announces a new prefix through it, while the same prefix is not announced (or is announced with the lowest metrics, so that those routes are very rarely used) via the old transits, which continue to provide bandwidth to it. I am a business customer, so I have a fiber-based link to the ISP and a tiny subnet is assigned to me. The subnet provided to me is part of a prefix announced by the AS of a transit that, it seems, does not have a presence in my country. So when I trace the route, once the packets leave my ISP's AS they take about 275ms to reach the transit provider's core router, which is located in the USA (half the world away). For upstream traffic my ISP uses a transit provider (tier 1) that has a presence in my country, but the return path is always through the transit in the USA, so average latency is 400ms. All the users of other ISPs in my country reach my subnet via the USA. Even traffic from neighboring countries and from Europe (which is much nearer) follows the path via the USA. Sites using CDNs also resolve to IPs in the USA. I have informed the ISP NOC about the issue and asked them to provide an IP subnet belonging to a prefix announced by a local transit (preferably a tier 1 transit provider), and I am waiting for a reply. My questions: Is this a serious issue that I must follow up to get resolved? When I compare the latency of other providers in my country, it is in most cases less than half of my ISP's. Why doesn't my ISP announce all its prefixes to all of its transit providers, so that packets can take efficient and nearby routes to reach prefixes that originate within its network?
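
    For gathering evidence before following up with the NOC, a couple of shell one-liners can document both the return path and who is announcing the prefix (a sketch; 203.0.113.0/24 stands in for the real prefix):

        # path and per-hop latency as seen from the business link
        mtr --report --report-cycles 20 www.google.com

        # which origin AS has registered / is announcing the prefix the subnet belongs to
        whois -h whois.radb.net 203.0.113.0/24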

    Read the article

  • configuring lighttpd for large downloads

    - by ahmedre
    I run a web site that hosts pages that are just general scripts (PHP, etc.) and MP3 downloads (some of which are fairly large - up to 200MB). I am running lighttpd on the servers on Linux (Ubuntu 64-bit). Everything is fine, but under high load the server is not accessible (or very slow - even sshing in takes a while), and I am guessing this is due to a huge number of MP3 downloads at that time. Consequently, DNS sees the server as down and redirects all the traffic to the other servers, and after a while it comes back up and things work again. So what's the best way to fix this? Ideally, I want the server to continue running (and the web pages - PHP etc. - to always work, but downloads don't always have to work). Should I just have 2 web servers running (one for the downloads and one for the PHP pages), or is it perhaps something I can fix in my lighttpd configuration? Here are the snippets from my configuration:

        server.max-worker = 4
        server.max-fds = 2048
        server.max-keep-alive-requests = 4
        server.max-keep-alive-idle = 4
        server.stat-cache-engine = "fam"

        fastcgi.server = ( ".php" => ((
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/tmp/php.socket",
            "max-procs" => 1,
            "idle-timeout" => 20,
            "bin-environment" => (
                "PHP_FCGI_CHILDREN" => "64",
                "PHP_FCGI_MAX_REQUESTS" => "1000"
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
        )) )

        # normal php site
        $HTTP["host"] =~ "bar.com" {
            server.document-root = "/usr/local/www/sites/bar.com/"
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/bar.log"
        }

        # download site
        $HTTP["host"] =~ "(download|stream).foo.com" {
            server.document-root = "/home/audio/"
            dir-listing.activate = "enable"
            dir-listing.hide-dotfiles = "enable"
            evasive.max-conns-per-ip = 1
            evasive.silent = "enable"
            # connection.kbytes-per-second = 256
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/download.log"
        }
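
    One low-risk thing to try before splitting the servers is throttling the download vhost so it cannot starve the PHP site - a sketch built on the directives already present in the config above (figures are illustrative and would need tuning to the available bandwidth):

        # download site: cap each connection and allow a couple per client
        $HTTP["host"] =~ "(download|stream).foo.com" {
            evasive.max-conns-per-ip = 2
            connection.kbytes-per-second = 256
        }
        # server-wide ceiling so MP3 traffic leaves headroom for PHP and ssh
        server.kbytes-per-second = 8192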

    Read the article

  • Relaying to tech "support" that computer is actually broken.

    - by Sion
    First some background: I have a Dell Inspiron 15R M050; it is still under the Dell limited warranty and the Best Buy extended warranty. I am currently dual-booting Debian Squeeze and Windows 7, and the only reason I go into Windows is to play video games, specifically Steam games. Issue: when I play my games in Windows I can play for anywhere from 5 minutes to 2 hours before I suffer a hard-lock. I cannot alt-tab, ctrl-alt-delete, ctrl-shift-escape, or do anything for 2-3 minutes. After this hard-lock period everything runs fine, and I can continue the game for probably another hour at least before I suffer another lock. Games: Borderlands, Splinter Cell: Chaos Theory, Starcraft 2, Garry's Mod. What I have tried: running the diagnostic suite in the Dell BIOS, restoring the OEM Windows recovery partition on the HD, fresh-installing Windows 7 Professional, updating the BIOS, and calling tech support and having them run a software hardware-diagnostics suite. The question: from the research I have done, I think it might be a lack of thermal paste on the CPU. Would I be able to go to Best Buy and have them run a diagnostic at the hardware level, and then have them tell Dell that there is a hardware issue? Or could it be a different problem?
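
    Before hauling it in, it may be worth confirming the overheating theory yourself. The lock-ups happen under Windows, but if the CPU also overheats under a sustained load in Linux, that points at the cooling. A quick sketch using lm-sensors on the existing Debian install:

        sudo apt-get install lm-sensors
        sudo sensors-detect          # accept the defaults, then load the suggested modules or reboot
        watch -n 2 sensors           # leave running while the machine is under load and watch CPU temps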

    Read the article

  • How do I fix error 1303 during TI Connect install?

    - by smoth190
    I recently purchased a TI-84 Plus graphing calculator, and I'm trying to install the TI Connect software in order to connect the calculator to my computer via the USB cable. Unfortunately, I'm getting this error while trying to install the program:

        Error 1303. The installation has insufficient privileges to access this directory:
        E:\Data\Timothy\Documents\MyTIData. The installation cannot continue.
        Log on as administrator or contact your system administrator.

    However, my account is the only account on my PC, and it has administrative privileges. I've also tried running the installer with Run as Administrator, but with no luck. If I create the folder MyTIData manually, I receive this error:

        Error 1317. An error occurred while attempting to create the directory:
        E:\Data\Timothy\Documents\MyTIData

    I've reapplied the security settings on the E:\Data folder (and all its sub-directories) to Full for my account. I've also gone into Computer Management and given SYSTEM full privileges for the entire disk. I've also logged out, logged back in, restarted, etc., but still no luck. Now, I should mention that my Documents folder is not in the default location. I moved it because my C: disk is a 90GB SSD, so I moved all my personal data onto the extra storage disk (which is ~1TB). I don't know if that is causing the issue, but it can't hurt to throw it out there. So why can't I install this program? Googling the problem brings up this error for various other installers (such as Visual Studio and Microsoft Office), but nothing for TI Connect. All the solutions are the same: give the folder Full privileges... but I've already done this! I've also tried running the installer with and without the calculator plugged in, but it didn't change anything. In the prompt that contains the error, repeatedly clicking Retry, or waiting a few moments before clicking Retry, also produces no result.
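
    Since the permissions dialog has already been tried, it may be worth resetting ownership and ACLs on the relocated Documents tree from an elevated command prompt - a sketch using the path from the error message:

        takeown /F "E:\Data\Timothy\Documents" /R /D Y
        icacls "E:\Data\Timothy\Documents" /reset /T
        icacls "E:\Data\Timothy\Documents" /grant "%USERNAME%":(OI)(CI)F /T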

    Read the article

  • Linux wireless disconnect every 20 minutes

    - by james
    My laptop runs CentOS 6.3 with kernel 2.6.32-279.el6.x86_64. My wireless adapter is an Intel Corporation Centrino Wireless-N 1000. The wireless connection always drops after about 20 minutes. The network applet shows the connection is still up with good signal strength, but I just cannot load any web pages, not even the configuration page of the wireless router. The problem continues until I disable and reconnect the wireless. Other devices, like my cell phone, use the same wireless network without this problem. Just yesterday I was using the same laptop with Fedora 17 without this problem. I also searched the internet, and someone said that running the NetworkManager and network services simultaneously may be a problem. But I cannot stop either of them, because: if I stop network and start NetworkManager, the network service will start automatically; if I stop NetworkManager and run network, it says "Device does not seem to be present, delaying initialization." when trying to bring up the wireless. What should I do to get rid of the problem? Thank you very much!
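
    A sketch of letting a single service own the interface on CentOS 6 (the interface name wlan0 is an assumption - check it with ip link):

        # option 1: let NetworkManager manage the wifi alone
        chkconfig network off

        # option 2: hand the interface to the legacy network service instead
        chkconfig NetworkManager off
        chkconfig network on
        service NetworkManager stop
        service network restart
        # "Device does not seem to be present" with option 2 usually means the ifcfg file
        # for the wireless card is missing or still has NM_CONTROLLED="yes" in
        # /etc/sysconfig/network-scripts/ifcfg-wlan0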

    Read the article

  • IPv6 seems to be enabled - How do I configure it without interfering with IPv4?

    - by Mister IT Guru
    I noticed that some of my CentOS boxes have IPv6 enabled and seem to have addresses. I have no problem with this, but I would like to get a handle on it, and even connect to them using IPv6. This would really help if DHCP has a hiccup for any reason. But I'm a bit lost as to where the configuration on my CentOS box is. (I am also on Google researching this, but I like Server Fault! :) ) I am hoping to be able to log into these boxes via the VPN, because every now and then that DHCP device has a bad morning and needs to be restarted. (I'm also looking into that issue, but someone else handles it - management separation gone mad!) It's a remote client site, so it would be a lot easier for me to connect to these systems, which seem to self-configure, and use them as a pivot via ssh tunnels to reach the other remote devices and continue managing them while our main route is being fixed. I guess my questions are: how can I configure IPv6 without interfering with IPv4, and, on CentOS, can I influence this auto-configuration I seem to be seeing?
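
    A minimal sketch of adding a static IPv6 address on CentOS without touching the IPv4 settings (addresses are from the 2001:db8::/32 documentation range - substitute your own):

        # /etc/sysconfig/network
        NETWORKING_IPV6=yes

        # /etc/sysconfig/network-scripts/ifcfg-eth0
        IPV6INIT=yes
        IPV6ADDR=2001:db8:10::5/64
        IPV6_DEFAULTGW=2001:db8:10::1
        # IPV6_AUTOCONF=no   # only if you want to drop the router-advertised (auto-configured) address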

    Read the article

  • What is causing occasional white windows on my Mac?

    - by user63333
    Hello. I'm having a very strange problem with my Mac lately. When I'm working in an app and a new window pane or sheet is displayed, sometimes it comes up completely white. Once an app is having these problems, it will continue to bring up a blank screen for that particular window (although other windows work fine). After the app is relaunched, the window is fine again. What I'm noticing that's very strange is that although the interface turns completely white, the functions of the interface are still available. So I have to "navigate blindly" around the interface until I can relaunch. This occurs throughout the operating system. Screenshots: this is what happened when I tried opening the File menu in the Lightroom app. This is what happened to me on Lynda.com (in Firefox) after selecting the "Software..." dropdown (all other dropdowns were fine; reloading the page fixed it). When I was decompressing a file, The Unarchiver launched and opened this white window; it still decompressed the file. This is what happened one time when I opened Finder (with TotalFinder) to my Downloads folder. This is something I've never seen before, and it just started happening lately. What could be the problem? Thanks for your help. NOTE: since new users are not allowed to post images, just imagine blank white interface elements. And since new users also aren't allowed to post more than one link, here's the first screenshot:

    Read the article

  • Postfix relay all mail through SES except for one sending domain / address

    - by Kevin
    I'm thinking this is really super simple, but I can't figure out what I need to do. I don't mess with Postfix much (I just let it run and do its thing), so I've got no idea where to even start with this. We have Postfix currently configured to relay all mail out through SES using the configuration below. We need to modify this so that emails sent from one of our domains (domain.com) DO NOT go through SES. Everything else should continue to flow out through the SES connection. I'm assuming this is like a one-line thing, but my Google skills are not helping me at all.

        relayhost = email-smtp.us-east-1.amazonaws.com:25
        smtp_sasl_auth_enable = yes
        smtp_sasl_security_options = noanonymous
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_use_tls = yes
        smtp_tls_security_level = encrypt
        smtp_tls_note_starttls_offer = yes
        smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
        smtp_destination_concurrency_limit = 450

    Update: I have created a sender_transport file in /etc/postfix. In it is:

        @domain.com smtp:

    I then ran this through postmap, placed

        sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport

    above the block of code shown earlier, and restarted Postfix, but all email is still going out through SES. Log after sending:

        Oct 22 14:38:48 web postfix/smtp[19446]: 4B19D640002: to=<[email protected]>, relay=email-smtp.us-east-1.amazonaws.com[54.243.47.187]:25, delay=1.4, delays=0.01/0/0.92/0.44, dsn=2.0.0, status=sent (250 Ok 00000141e21b181f-ee6f7c4f-f0f5-4b0f-ba69-2db146a4f988-000000)
        Oct 22 14:38:48 web postfix/qmgr[19435]: 4B19D640002: removed

    I don't think this log is what you're looking for, but it's the only thing that is logged when mail goes out, and this is with me running /usr/sbin/postfix -v start manually and not with the init script.
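
    For reference, a minimal sketch of the sender-dependent approach (it needs Postfix 2.7 or newer, and the map is matched against the envelope sender, not the From: header - something worth double-checking in the mail log):

        # /etc/postfix/sender_transport
        @domain.com    smtp:

        # /etc/postfix/main.cf
        sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport

        # rebuild the map and reload, then look at the envelope sender Postfix actually sees
        postmap /etc/postfix/sender_transport
        postfix reload
        grep 'from=<' /var/log/mail.log | tail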

    Read the article

  • Does this mean the router is faulty?

    - by Ashfame
    I have a router to which my desktop (running Ubuntu) is connected via LAN, and I also use it on my phone via wifi. Sometimes the LAN connection will stop working for no reason while the wifi keeps working fine, and the problem resolves itself on its own. Since last night the router has been restarting again and again on its own, so I lodged a complaint about it, and they said the router is faulty and will be replaced - but I suspect they don't really know how things work and are just shooting an arrow in the dark. These restarts have happened for the first time; the LAN-vs-wifi issue described earlier is a common one (but not a frequent one). So is the router faulty, or is there some issue on my ISP's side that will continue to persist even after they change the router? My best guess is that they will replace it with an older refurbished router which will tend to give me more trouble down the line, so it's better to have it changed only if it is actually faulty (this one is new - 6 months old, and I am its first-hand user). I am happy to provide any details.

    Read the article

  • How browsers handle multiple IPs

    - by Sandman4
    Can someone direct me to information on the exact behavior of browsers when they get multiple A records for a given hostname (say ip1 and ip2) and one of them is not accessible? I am interested in EXACT details, like (but not limited to):

    - Will the browser get 2 IPs from the OS, or will it get only one?
    - Which IP will the browser try first (random, or always the first one)?
    - Now, let's say the browser started with the failed ip1: for how long will it keep trying ip1?
    - If the user hits "stop" while it waits for ip1 and then clicks refresh, which IP will the browser try?
    - What will happen when it times out - will it start trying ip2 or give an error? (And if an error, which IP will the browser try when the user clicks refresh?)
    - When the user clicks refresh, will any browser attempt a new DNS lookup?
    - Now let's assume the browser tried the working ip2 first. For the next page request, will it still use ip2, or might it randomly switch IPs?
    - For how long do browsers keep IPs in their cache?
    - When a browser sends a new DNS request and gets the SAME IPs, will it CONTINUE to use the same known-to-be-working IP, or does the process start from scratch so it may try either of the two?

    Of course this may all be browser dependent, and may also vary between versions and platforms; I'd be happy to have as much detail as possible. The purpose of this: I'm trying to understand what exactly users will experience when round-robin DNS is used and one of the hosts fails. Please, I'm NOT asking about how bad DNS load balancing is, so please refrain from answering "don't do it", "it's a bad idea", "you need heartbeat/proxy/BGP/whatever" and so on.

    Read the article

  • Configure Apache + Passenger to serve static files from different directory

    - by Rory Fitzpatrick
    I'm trying to set up Apache and Passenger to serve a Rails app. However, I also need it to serve static files from a directory other than /public and give precedence to these static files over anything in the Rails app. The Rails app is in /home/user/apps/testapp and the static files are in /home/user/public_html. For various reasons the static files cannot simply be moved to the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder. Here is the config I'm using:

        <VirtualHost *:80>
            ServerName domain.com
            DocumentRoot /home/user/apps/testapp/public
            RewriteEngine On
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d
            RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
        </VirtualHost>

    This serves the Rails application fine but gives a 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html, but this doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request. Interestingly, if I change the conditions to !-f and !-d and the rewrite rule to redirect to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist). How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?
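
    A sketch of one way this is commonly handled (not tested against this exact setup): test against %{REQUEST_URI} rather than %{REQUEST_FILENAME}, join the two conditions with [OR] (as written above both must match, which never happens), and rewrite to an Alias with pass-through so Apache can map a path outside DocumentRoot. Apache 2.2 syntax assumed; /static-fallback is an arbitrary internal prefix:

        Alias /static-fallback /home/user/public_html
        <Directory /home/user/public_html>
            Order allow,deny
            Allow from all
        </Directory>

        RewriteEngine On
        RewriteCond /home/user/public_html%{REQUEST_URI} -f [OR]
        RewriteCond /home/user/public_html%{REQUEST_URI} -d
        RewriteRule ^/(.*)$ /static-fallback/$1 [PT,L]
        # anything not found under public_html falls through to Passenger / the Rails app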

    Read the article

  • Good Enough Failover Strategy for DNS / MySQL / Email

    - by IMB
    I've asked and read a lot of questions regarding DNS failover, but the more I read the more complicated it becomes; some people say it's good enough, some say it isn't, and there are no clear answers from what I've read. I was wondering if we can settle it once and for all, at least for the requirements of most websites out there. Right now let's assume the following: we don't really need load-balancing; what we need is a failover solution. We are running a website based on LAMP on a VPS. We need to make sure that the web server, MySQL, and email are always accessible, if not 99% of the time. Basically, here's my idea and my questions about it:

    Web server: we need at least one failover server (another VPS in a separate data center). Is DNS failover via round robin good enough, and if not, what's the best approach? How exactly do you implement it? And how do you make sure the files you upload/delete on server A are also on server B?

    MySQL: I've only read a brief intro to MySQL replication, and I assume that I can replicate server A to server B and vice versa on the fly, right? So in case server A fails while server B is running, things continue to work and replicate back to server A when it becomes available. In essence server B is now the primary server, and will later fail over to server A should a failure happen again.

    Email: if we are going to use DNS failover, using webmail or relying on emails stored on the server is probably not a good idea, right? Since some emails might be on server A while some might be on server B? I assume a basic email forwarder to a third party (like Gmail, for example) is good enough to ensure all emails are kept in one place.

    Here's a basic diagram for a better picture: http://i.stack.imgur.com/KWSIi.png
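
    For the MySQL part, a minimal sketch of the usual master-master settings (values are illustrative; mirror them on server B with server-id = 2 and auto_increment_offset = 2):

        # /etc/mysql/my.cnf on server A
        [mysqld]
        server-id                = 1
        log_bin                  = /var/log/mysql/mysql-bin.log
        auto_increment_increment = 2
        auto_increment_offset    = 1
        # then CHANGE MASTER TO ... and START SLAVE on each side, pointing at the other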

    Read the article

  • Juniper router dropping pings to external interface

    - by Alexander Garden
    My organization has a Juniper SSG20-WLAN that routes our traffic to the outside world. We've been having intermittent problems with our internet connection, so I wrote up a Python script to ping the internal interface of the router, the external interface, a couple of our internal servers, the ISP router our router talks to, their upstream provider, and Google and Yahoo for good measure. It does that about every minute. What I have found is that when our internet goes out, our Juniper router stops responding to pings on its external interface. Everything past that is, of course, unreachable. The internal interface and our internal servers continue to echo back without interruption. None of the counters indicate dropped packets of any type; they all look normal. The logs complain about VIP servers being unavailable, but otherwise show nothing indicative of network issues. My questions are these: Does this exonerate our ISP? Or, conversely, might a problem with the connection be causing the external interface to go down? Is there somewhere else in the SSG20, besides the system log and counters, that might help me track down info on the problem? UPDATE: it turned out that one of the switches between my monitoring box and the router was a router itself, and was occasionally diverting traffic from the gateway to itself. Kudos to those who made suggestions along those lines. Not really sure which answer to mark as accepted, as it was really stuff in the comments that turned out to be right. Thanks for the suggestions.

    Read the article

  • Requests are making it to my app server, but not into node.js -- why?

    - by Zane Claes
    I detailed in this question on StackOverflow how some random requests are not making it from the client to my Node.js app server, resulting in a gateway timeout. In summary, identical requests are, at random, not even making it far enough to trigger a console.log() in my first line of express middleware. I need to narrow down the problem, though, to find out WHERE the traffic is being lost, and it was suggested that I try a packet sniffer on my app servers. Here's my setup:

    - 2x Load Balancers (m1.larges)
    - 2x node.js servers (also m1.large)

    Here's what's interesting/unusual: the node.js servers started as PHP servers with an Apache stack and continue to serve PHP files for my domain (streamified.me). However, I use a little httpd.conf magic on the app servers so that requests to api.streamified.me get routed over port 8888 to the node.js server:

        RewriteCond %{HTTP_HOST} ^api.streamified.me
        RewriteRule ^(.*) http://localhost:8888$1 [P]

    So, the request hits the load balancer = goes to an app server = gets routed to port 8888 if it's intended for the API = gets handled by node.js. So, in the same httpd.conf file, I turned on RewriteLogLevel 5 and then created a simple PHP+CURL script on my localhost to hit api.streamified.me with a random URL (which should cause node.js to trigger a simple "not found" response) until it resulted in a gateway timeout. Here, you can see that it has happened - and the rewrite log shows that the request was definitely received by the app server and forwarded to port 8888... but it was never received by node.js (or, at least, the first line of code in the first line of middleware never gets it...) Image Link: http://i.stack.imgur.com/3OQxS.png
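
    A sketch of the packet-sniffing step (interface names are assumptions; the proxied request travels over loopback because of the localhost rewrite target):

        # on an app server: does the rewritten request ever reach port 8888?
        sudo tcpdump -i lo -nn -A 'tcp port 8888'
        # and, for comparison, what arrives from the ELB on port 80
        sudo tcpdump -i eth0 -nn 'tcp port 80'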

    Read the article

  • Sun Power Button Won't Shut Down System

    - by user36680
    Background: we are running NIS and have NFS mounts from a Solaris 10 workstation to a Solaris 8 server. If the workstation loses its network connection for some reason, when I look at the workstation's console I see repeated messages of the form:

        <date> <time> <hostname> ypbind[<pid>]: NIS server not responding for domain "<domain>"; still trying.

    If I try to log in at the console as a user, it won't work because it can't authenticate my account through NIS. Also, it won't return to a login prompt again, so I can't log in as root. If I press the power button (don't hold it in) on the workstation, I see:

        <date> <time> <hostname> power: WARNING: Power off requested from power button or SC, powering down the system!
        Shutdown started. <date> <time> Changing to init state 5 - please wait.
        <date> <time+2 minutes> <hostname> power: WARNING: Failed to shut down the system!

    And I continue to see messages of the form:

        <date> <time> <hostname> ypbind[<pid>]: NIS server not responding for domain "<domain>"; still trying.

    So, the questions are: How do I make NIS stop trying (because I know it will fail)? Why won't it shut down?

    Read the article

  • System occasionally hangs boot process with SLES 11

    - by ThaMe90
    I have several (new) systems on which I had to install SLES 11. However, after a few (though not all) reboots, the system hangs during the boot sequence. It will only continue after I physically press a key on the keyboard. Here is what I found in the dmesg log from a failed boot:

        [   22.170276] sd 0:0:0:0: [sda] Mode Sense: b7 00 00 08
        [   22.171155] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [   22.182760] sda: sda1 sda2 sda3
        [   22.383424] sd 0:0:0:0: [sda] Attached SCSI disk
        [   22.545372] PM: Marking nosave pages: 000000000009a000 - 0000000000100000
        [   22.545377] PM: Marking nosave pages: 00000000bf780000 - 0000000100000000
        [   22.546217] PM: Basic memory bitmaps created
        [   22.590380] PM: Basic memory bitmaps freed
        [   22.596284] PM: Starting manual resume from disk
        [   22.602319] PM: Resume from partition 8:1
        [   22.602321] PM: Checking hibernation image.
        [   22.602479] PM: Error -22 checking image file
        [   22.602481] PM: Resume from disk failed.
        [   22.718727] kjournald starting. Commit interval 15 seconds
        [   22.718960] EXT3-fs (sda3): using internal journal
        [   22.718964] EXT3-fs (sda3): mounted filesystem with ordered data mode
        [ 1555.644404] udevd version 128 started
        [ 1555.697664] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
        [ 1555.707961] ACPI: Power Button [PWRB]

    I've looked around the internet for the "PM: Resume from disk failed." message, but this seems to matter only when restoring the system after a hibernate, i.e. a restore from the HDD. That is not my situation; I only get this after a reboot, as I said before. The timestamp [ 1555.xxxxxx] is only the result of me pressing a key on the keyboard. Any suggestions on how to proceed? I am getting stuck on this issue.

    Read the article

  • AWS: Multi-region setup using single RDS instance

    - by Ion
    I'm trying to scale our web application (PHP, MySQL, memcache) in a multi-region scheme. Currently we are using a setup with two EC2 instances behind an ELB and an RDS instance, all of them in the US-EAST (Virginia) region. We would like to have a presence in the EU (Ireland) region as well. This means at least a new EC2 instance there (identical to the others, serving the same application). I have copied the desired AMI, set up the new instance, set up the same ELB configuration (required for SSL termination), and configured latency-based routing in Route 53, and it works as intended. But clients from the EU have speed problems. This is due to the fact that the EU EC2 instances connect to the US-based RDS instance. As far as I know, Amazon has not yet enabled RDS multi-region replication. Do you have any suggestions on how to properly speed up the whole setup while using the single RDS instance? Also, any ideas in general on how to scale things up? Ideally we would like to continue using RDS for various reasons. Nevertheless, I am open to suggestions (I guess the next idea would be to host our own MySQL servers).

    Read the article

  • How to backup a large FreeNAS?

    - by Ze'ev
    We have a 12TB FreeNAS box in the office and are looking for a way to keep a backup of it offsite. We're considering (1) tape; (2) a bunch of bare drives (popped into a spare hotswap bay); (3) external drives. Any advice on which solution is best? (Online backup is not an option because our internet connection is too slow.) Also, is there some software that will keep track of which files have been backed up and which haven't, so that when one backup unit fills up, we can continue the backup on the next? (We don't want to have to back up to a single 12TB device.) This software would preferably run on the NAS itself, or else from one of our Mac clients. Our goal is a situation where we attach some backup device; it automatically fills up with stuff from the server; the contents of this unit are catalogued somewhere; something prompts us to replace it with a fresh drive/tape; and the backup continues until everything is copied, including any files that have changed since being backed up.
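
    In the absence of dedicated software, a rough sketch of the per-unit approach with rsync (available on FreeNAS and OS X; dataset and mount-point names are illustrative) - one dataset per run so they can be spread across drives, with a plain-text catalogue of what went where:

        rsync -a /mnt/tank/projects/ /mnt/usb1/projects/ \
          && echo "projects -> usb1 $(date +%F)" >> /root/offsite-catalogue.txt
        rsync -a /mnt/tank/media/ /mnt/usb2/media/ \
          && echo "media -> usb2 $(date +%F)" >> /root/offsite-catalogue.txt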

    Read the article

  • Linux Live CD only works when Windows is in Legacy mode?

    - by Vee
    I have asked a similar question before and no one was able to help me, but I think it was because I wasn't phrasing it properly. This is a better restatement of the question. I have Windows 8 and Linux Mint dual-booted on my PC. When I tried to boot Linux from the CD-ROM only, it would give me the following error:

        error: failure reading sector 0x0 from 'hd1'
        error: you need to load the kernel first.
        Press any key to continue...

    Linux Mint otherwise works fine, but it gives this error when I try to boot from the CD. Booting Linux from the CD only worked when I switched Windows to Legacy mode in the BIOS settings. When I changed it back to UEFI, it would give the same error. Why is this? How can I fix it? I am somewhat new, so is there anything else I should know about all of this? NOTE: I changed the Linux install to UEFI mode using boot-repair, but that still did not solve the problem when I tried to boot from the CD-ROM.

    Read the article

  • How do I stop linux from trying to mount android phone as usb storage?

    - by user1160711
    When I plug my Motorola Triumph into my Fedora 17 Linux box's USB port, I get an endless series of errors on the Linux box as it desperately attempts to mount the phone as a USB drive. Stuff like this:

        Jun 23 10:26:00 zooty kernel: [528926.714884] end_request: critical target error, dev sdg, sector 4
        Jun 23 10:26:00 zooty kernel: [528926.715865] sd 16:0:0:1: [sdg] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Jun 23 10:26:00 zooty kernel: [528926.715869] sd 16:0:0:1: [sdg] Sense Key : Illegal Request [current]
        Jun 23 10:26:00 zooty kernel: [528926.715872] sd 16:0:0:1: [sdg] Add. Sense: Invalid field in cdb
        Jun 23 10:26:00 zooty kernel: [528926.715876] sd 16:0:0:1: [sdg] CDB: Read(10): 28 20 00 00 00 00 00 00 04 00

    If I go ahead and tell the phone to allow Linux to mount the USB storage, the messages stop and I get a mounted drive, but if all I want to do is use the debug bridge, my log on the Linux box will continue to fill with this junk. Is there some udev magic I can do to make the system ignore this particular device as far as USB storage goes? I just noticed that if I tell the phone to enable USB storage, let Linux recognize the new disk, and then tell the phone to disable USB storage again, I get one additional log message about the capacity changing to zero, but the endless spew of messages stops - so I guess one workaround is to enable and disable USB storage right away.
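
    A sketch of the udev side (22b8 is Motorola's USB vendor ID - confirm it, and the product ID, with lsusb while the phone is attached; note this hides the device from udisks/automounting, which may or may not be enough to silence the kernel's own probe errors):

        # /etc/udev/rules.d/99-ignore-triumph.rules
        SUBSYSTEM=="block", ATTRS{idVendor}=="22b8", ENV{UDISKS_IGNORE}="1", ENV{UDISKS_PRESENTATION_HIDE}="1"

        # then reload the rules and replug the phone
        udevadm control --reload-rules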

    Read the article

  • Windows 7 Startup fails after installation

    - by Nadav S.
    I installed Windows 7 Ultimate a week ago. Yesterday I noticed that the SP1 update was available. After installation of SP1 the computer failed to start up and showed a black screen. I couldn't even get into safe mode, and F8 didn't help. After some repair attempts, including System Restore, Startup Repair, and bootmgr & BCD rebuilding from the command prompt, I decided to reinstall Windows (after all, it's only a week old). After the successful installation, I decided to first install all available updates before continuing. After the updates downloaded and installed, the computer rebooted. Now I didn't see the black screen again - but on the "Starting Windows..." screen the logo didn't appear and the system didn't start up. Here I could get to the safe mode selection window, but it didn't work either ("Loading Windows Files", but nothing happens). I've also tried:

    - I thought the CD was corrupted, so I used a fresh Windows disc and tried both the x86 and x64 versions - the same symptoms, no change.
    - Reset the BIOS to defaults - no change.
    - Memory diagnostic.
    - HDD diagnostic.
    - Restarted Windows WITHOUT INSTALLING UPDATES, but it had the same symptom, so maybe Windows Update is not the cause?!

    I've tried installing it so many times that I am simply stuck - I can't "reinstall Windows, because it is corrupted...". Maybe the HDD is corrupted? I've also checked it and didn't find a problem.

    Read the article

  • How can you force the start and end date of a task in Microsoft Project to be on the same day?

    - by Hauke P.
    I have a task called "Interview person A about topic X". The task's duration is set to 2 hours. The start date of the task should be calculated automatically, taking dependencies and resource availability into account. My question boils down to: how can I force this task to start and end on the same date? Background: in my case, Microsoft Project sets the start date to a Friday at 5pm. As my working hours are set to 8am to 12pm and 1pm to 6pm (Mon-Fri), Microsoft Project "splits up" the task at 6pm on Friday and plans to continue it at 8am on the following Monday. However, it does not make any sense to stop the interview on a Friday and restart it on Monday, so the automatic suggestion is not helpful in this case. That's why I'm looking for a way to force the task to start and end on the very same day. (In my example, I'd like Microsoft Project to delay the start date of the task until Monday 8am, as this is the first time slot in which the task "fits in" completely.) By the way: I have lots of such cases, so it would be really great if there were a solution that doesn't just deal with this single special case.

    Read the article

  • keyboard intermittently stops working, even after reinstalling windows 7; possibly a Chrome issue?

    - by neverskipbreakfast
    My keyboard intermittently stops working. Sometimes a couple of keys will work, but usually none. Sometimes if I mash a couple of the ctrl/alt/windows-type keys randomly for a bit, the keyboard will let me type one more letter before stopping again. Sometimes the keys will open a program menu, but usually not. I have even completely wiped my machine and reinstalled Windows 7; the problem continues. Specs: Intel iMac (early 2006, 2.0GHz, 2GB RAM, 240GB HD) running ONLY Windows 7 Professional, 32-bit (NOT through Boot Camp) and using a USB keyboard (Saitek Eclipse II).

    - Unplugging & reconnecting the keyboard does NOT fix it.
    - Connecting a different keyboard does NOT fix it. That one won't work, either.
    - Drivers are up to date. Removing and reinstalling drivers does NOT fix it.
    - Restarting the computer does NOT fix it. In fact, when the Windows logon screen appears, the keyboard won't work, and neither will the icon to pull up the on-screen keyboard. Otherwise my mouse can click around just fine. I can only log onto a non-password-protected account.
    - Generally, logging in as a different Windows user fixes it. I can then log back on to my main user account and continue working for a few hours until it happens again.
    - Clearing my Chrome browsing data stopped the problem from recurring for a week or so.
    - I use Avira free antivirus software, and repeated scans turn up nothing fishy.
    - I have already REINSTALLED Windows 7 (not just a restore). The problem returned after 2 days of use.

    The only thing I suspect is something in Google Chrome - I used my Google account to reload all my previous Chrome extensions, saved data, etc. (Chrome extensions installed: AdBlock, Better Google Tasks, DropBox, FB Photo Zoom, Google Mail Checker, StayFocusd.) Any ideas? Any at all?

    Read the article

  • Install Exchange 2013 with DSC

    - by Alain Laventure
    I tried to install Exchange 2013 with the WindowsProcess resource into an existing Exchange configuration. All prerequisites are installed (the Exchange organization already exists). This is my resource section:

        WindowsProcess Exchange2013
        {
            Credential = $credential
            Path       = "C:\Sources\Cumulative Update 5 for Exchange Server 2013 (KB2936880)\Setup.exe"
            Arguments  = "/mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms /TargetDir:C:\EX2013"
            Ensure     = "Present"
        } # End Filter
        } # End Node
        } # End configuration

        /*
        @TargetNode='TargetDSC02'
        @GeneratedBy=exadmin
        @GenerationDate=08/02/2014 08:16:03
        @GenerationHost=SOURCEDSC02
        */
        instance of MSFT_Credential as $MSFT_Credential1ref
        {
            Password = "Password1";
            UserName = "S05\\Exadmin";
        };

    Exadmin is a member of the Organization Management group and is also a member of the Domain Admins group, so it should be able to install Exchange. When I execute this resource, the Exchange installation starts, but after about 1 minute it stops with this error:

        Failed [Rule:GlobalServerInstall] [Message:You must be a member of the 'Organization Management'
        role group or a member of the 'Enterprise Admins' group to continue.]

    To be sure that rights really are the problem, I created a special user with only local Administrator rights on the Exchange server and no Exchange permissions, and ran the following manually on the new Exchange server:

        .\Setup.exe /mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms /Targetdir:C:\EX2013

    I got the same error as with DSC. After I added my test user to the Organization Management group and ran the same command again manually, the Exchange 2013 installation finished without any error. That proves that the problem with DSC is permissions.
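
    One hedged thing to try (not verified against this environment): run setup through a Script resource instead of WindowsProcess, so the SetScript executes under a fresh logon for the privileged account, which may carry its group membership differently. Paths and arguments mirror the question:

        Script Exchange2013Setup
        {
            Credential = $credential   # must resolve to a member of Organization Management
            GetScript  = { @{ Result = (Test-Path "C:\EX2013\Bin\ExSetup.exe") } }
            TestScript = { Test-Path "C:\EX2013\Bin\ExSetup.exe" }
            SetScript  = {
                $setupArgs = "/mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms /TargetDir:C:\EX2013"
                Start-Process -FilePath "C:\Sources\Cumulative Update 5 for Exchange Server 2013 (KB2936880)\Setup.exe" `
                              -ArgumentList $setupArgs -Wait -NoNewWindow
            }
        }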

    Read the article
