Search Results

Search found 48592 results on 1944 pages for 'cannot start'.


  • logparser Message with error codes

    - by nsr81
Hi all, is there any way to get the complete error message using LogParser? When I run the following query:

      logparser -i:EVT -o:NAT "SELECT TimeGenerated,EventID,Message from System WHERE EventTypeName='Error event'"

    I get the following output:

      2009-09-02 19:35:44 7000 The USB Mass Storage Driver service failed to start due to the following error: %%1058

    The full "Message" in Event Viewer is:

      Description: The USB Mass Storage Driver service failed to start due to the following error: The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.

    How can I obtain the complete message using LogParser?
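
    The %%1058 placeholder is a Win32 error number that Event Viewer expands from the service's parameter message file; LogParser leaves it unexpanded. A hedged workaround: look the number up with net helpmsg, or substitute codes you have already identified directly in the query (REPLACE_STR is a standard LogParser string function; the substitution text below is just the expansion of 1058):

      REM Expand the placeholder manually (1058 = ERROR_SERVICE_DISABLED):
      net helpmsg 1058

      REM Or substitute known codes inline:
      logparser -i:EVT -o:NAT "SELECT TimeGenerated, EventID, REPLACE_STR(Message, '%%1058', 'The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.') AS FullMessage FROM System WHERE EventTypeName = 'Error event'"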


  • hadoop install cluster

    - by chnet
Is it possible to run Hadoop on multiple nodes when the nodes have Hadoop installed at different paths? I noticed that tutorials such as Michael Noll's, http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/, install Hadoop at the same path on every node. I tried to run Hadoop on a multi-node cluster with each node using a different installation path, but without success. Hadoop appears to assume by default that the installation paths are identical: when I execute start-dfs.sh, it tries to start the Hadoop daemons on the slave nodes from the same path as on the master, whereas the installation paths are different. Which file should I modify to reflect the installed location on each node?
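
    The cluster scripts (start-dfs.sh, start-mapred.sh) just ssh to each host in conf/slaves and run bin/hadoop-daemon.sh from the master's own path, so one hedged workaround is to skip them and start each daemon from its local installation; the install paths below are hypothetical:

      # On the master, from its installation:
      /opt/hadoop/bin/hadoop-daemon.sh start namenode
      /opt/hadoop/bin/hadoop-daemon.sh start jobtracker

      # On each slave, from that node's installation:
      /usr/local/hadoop/bin/hadoop-daemon.sh start datanode
      /usr/local/hadoop/bin/hadoop-daemon.sh start tasktracker

    Alternatively, a symlink on each node (e.g. ln -s /usr/local/hadoop /opt/hadoop) gives every machine the same nominal path and lets start-dfs.sh work unchanged.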


  • DRS: Unknown JNLP Location

    - by Joe
We are using Deployment Rule Sets to limit access to the older JRE to well-known applications, but we are running into a problem. One business-critical application has the following properties (*s to protect info): title: Enterprise Services Repository; location: null; jar location: http://app.*.com:52400/rep/repository/*.jar; jar version: null; isArtifact: true. The application downloads a .jnlp file and uses Java Web Start to execute. Since the location is null, this application cannot be targeted by a location rule, and the certificate-hash method only works when the application is cached (i.e. after it has been run more than once). If cache storing is off, which is the case in some situations, how can this application be targeted, or at least told to run with an older JRE on start? This problem is specifically noted in this bug. Thanks!
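
    A certificate rule matches on the signer rather than the launch location, so it can catch apps that report location: null. A minimal ruleset.xml sketch; the hash below is a placeholder for the SHA-256 fingerprint of the vendor's signing certificate, and the version attribute pins the JRE to use:

      <ruleset version="1.0+">
        <rule>
          <id>
            <!-- placeholder hash: replace with the signing cert's SHA-256 fingerprint -->
            <certificate algorithm="SHA-256"
                         hash="794F53C746E2AA77D84B843BE942CAB4309F258FD946D62A6C4CCEAB8E1DB2C6"/>
          </id>
          <action permission="run" version="1.6*"/>
        </rule>
      </ruleset>

    Whether this matches without the app being cached depends on the signature being checked at launch, so treat it as something to test rather than a guaranteed fix.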


  • Does WebDAV even work on IIS 7? I say nay

    - by FlavorScape
I've tried every configuration from the top 10 Stack Overflow and Server Fault results for WebDAV 405 errors on IIS (for the verbs PROPFIND and PUT). I'm running Server 2008 SP2 and followed all the instructions here. I'm no stranger to configuring servers, but this has gotten nowhere after 8 hours. What I've confirmed or tried so far:

    Confirmed system.webServer in applicationHost.config: <add name="WebDAV" path="*" verb="PROPFIND,PROPPATCH,MKCOL,PUT,COPY,DELETE,MOVE,LOCK,UNLOCK" modules="WebDAVModule" resourceType="Unspecified" requireAccess="None" />
    Port 443 with basic auth: same issue. Port 80 with Windows auth: broken (405).
    Windows authentication: check. Added authoring rules for the default site and application: check.
    Not the firewall: check. Added the "Desktop Experience" role feature.
    Tried HTTPS with Basic authentication on port 443: does not work.
    No other services such as SharePoint are running: check.
    Confirmed the user has read/write NT-level permissions on the folder/virtual directory.
    Tried net use * http://localhost /user:MYDOMAIN\me myPass and got error 1920; if I don't authenticate I get error 67.
    Confirmed I'm not applying request filtering to WebDAV: <requestFiltering> <fileExtensions applyToWebDAV="false" /> <verbs applyToWebDAV="false" /> <hiddenSegments applyToWebDAV="false" />

    The error: 405 - HTTP verb used to access this page is not allowed. The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access. SHOULD I JUST GIVE UP? Other questions that helped none: 405 - 'Method not Allowed' adding service hosted in IIS7; webdav on iis7.5 - simply cannot make it work; http://studentguru.gr/b/kingherc/archive/2009/11/21/webdav-for-iis-7-on-windows-server-2008-r2.aspx
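
    For comparison, this is roughly the command-line equivalent of what the GUI steps should leave behind: WebDAV authoring enabled on the site plus an authoring rule. A hedged sketch (site name assumed to be "Default Web Site"; tighten users/access before production use):

      %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
          -section:system.webServer/webdav/authoring /enabled:"true" /commit:apphost

      %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
          -section:system.webServer/webdav/authoringRules ^
          /+"[users='*',path='*',access='Read,Write,Source']" /commit:apphost

    If those are already in place, a failed-request trace for the 405 usually shows which module is rejecting the verb.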


  • Ubuntu Server 12.04 LTS on Hyper-V 2012

    - by user137533
I have the following scenario: a Hyper-V 2012 Server Core installation. On top of this I created a virtual machine on which I tried installing Ubuntu Server 12.04, which should not have any compatibility issues according to what Microsoft and Ubuntu are saying (although it is not officially supported). I start the installation and everything is fine: no problems detecting the network device or the hard drive (unlike Debian, which didn't even detect the hard drive). Once the installation is complete it asks me to reboot, unmounts the "DVD drive" and reboots. When it tries to start again I get the following error: Boot failure. Reboot and Select proper Boot device or Insert Boot Media in the selected Boot device. It seems not to be booting from the virtual hard drive. The hard drive is set up in SCSI mode, with nothing mounted on the IDE controller (no ISO image or anything else). Does anyone have any ideas on what I can do to solve this?
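
    That symptom matches a known Hyper-V 2012 constraint: generation-1 VMs can only boot from the IDE controller, so a disk attached to the SCSI controller installs fine but is invisible at boot time. A hedged PowerShell sketch of moving it (the VM name and VHD path are hypothetical):

      # Detach the disk from SCSI and re-attach it at IDE 0:0
      Remove-VMHardDiskDrive -VMName "ubuntu1204" -ControllerType SCSI `
          -ControllerNumber 0 -ControllerLocation 0
      Add-VMHardDiskDrive -VMName "ubuntu1204" -ControllerType IDE `
          -ControllerNumber 0 -ControllerLocation 0 -Path "D:\VMs\ubuntu1204.vhdx"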


  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server, 32-bit. It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions server. It does contain directories named with a date pattern, YYYYMMDD.xxx. Some of these folders are 12 months old and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders, with a service restart, and all will be OK; however there is a warning about having LiveUpdate Administrator installed. Firstly, I have no idea if I have this installed - how do I check? And secondly, can I just ditch these old files and restart? Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia. For those trying to assist me, thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands from the Run prompt, as my admin creds don't get me in (smc -stop). Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the LiveUpdate control panel and recovered 470MB. I suppose my last question, for those more experienced than myself, is: can I simply remove, say, the two oldest virus-definition folders without completely foobarring Endpoint Protection and the server? Regards, Gus
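
    For reference, the usual sequence on a machine where you do have rights is to stop the client, remove the stale dated folders (keeping at least the newest one or two), and start it again. A hedged sketch; the folder name below is just an example of the YYYYMMDD.xxx pattern:

      smc -stop
      rd /s /q "C:\Program Files\Common Files\Symantec Shared\VirusDefs\20090101.017"
      smc -start

    On a provisioned VM where smc refuses your credentials, the same steps would have to come from whoever holds the SEP admin password.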


  • How to change mod_rewrite to avoid {REQUEST_FILENAME} in order to get around 255 character URL limit?

    - by Jeremy Reimer
According to this answer: max length of url 257 characters for mod_rewrite?, there is a 255-character hard limit, based on the file system, for using mod_rewrite. According to the accepted answer, there are two solutions: 1. Change the URL format of your application to a maximum of 255 characters between each slash. 2. Move the rewrite rules into the Apache virtual host config and remove REQUEST_FILENAME. I cannot use the first method, so I am trying to figure out the second. I have put the rewrite rules into the Apache virtual host config as suggested. However, I cannot figure out how to remove REQUEST_FILENAME and still have my web application framework (Dragonfly) work. Here is the portion of the rewrite rules that I moved from .htaccess into the virtual host config file:

      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-l
      RewriteCond %{REQUEST_FILENAME} !-f [OR]
      # if you don't want Dragonfly to process html files, comment
      # out the line below (you may need to remove the [OR] above too).
      RewriteCond %{REQUEST_FILENAME} \.(html|nl)$
      # Main URL rewriting.
      RewriteRule (.*) index.cgi?$1 [L,QSA]

    I've tried removing %{REQUEST_FILENAME} and it just breaks the framework in various ways. How do I rewrite this without using %{REQUEST_FILENAME}?
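
    Those -d/-f/-l conditions exist only to let real files and directories bypass the framework, and each one forces mod_rewrite to stat() the mapped path, which is where the length limit bites. A hedged alternative is to stop consulting the filesystem altogether and whitelist the static URL prefixes instead (the prefixes here are hypothetical; list whatever actually holds static assets):

      # Send everything except real static-asset paths to Dragonfly,
      # with no filesystem checks at all:
      RewriteCond %{REQUEST_URI} !^/(static|images|css|js)/
      RewriteRule (.*) index.cgi?$1 [L,QSA]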


  • Synergy: server refused client

    - by Tom
I am trying to get Synergy up and running from a new Windows 7 computer to an XP computer. The Synergy installations seemed to go fine. I configured each to start on start-up, which they seem to do, but the client won't connect. I get repeated "server refused client with our name" messages in the log output. The Synergy FAQ says to "Add the client to the server's configuration file", but I can't find instructions on how to do this. I've looked and looked, but I'm lost...
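
    That message means the server's configuration has no screen entry matching the name the client reports. A minimal sketch of the server config file ("desktop-win7" and "laptop-xp" are placeholders; they must match each machine's screen name):

      section: screens
          desktop-win7:
          laptop-xp:
      end

      section: links
          desktop-win7:
              right = laptop-xp
          laptop-xp:
              left = desktop-win7
      end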


  • Apache not running from rc.d on FreeBSD

    - by Oksana Molotova
I'm using FreeBSD 8.3 and Apache 2.2. I didn't install Apache from ports; instead I compiled it from source because I wanted to move the binary and configuration to a different path (I'm centralizing all of the major production daemons and their configurations in a single place). In any case, I based the /usr/local/etc/rc.d/apache22 file on one from a different server where Apache was installed from ports; I only modified the binary and config paths within. I can manually execute it with /usr/local/etc/rc.d/apache22 start; however, even with apache22_enable="YES" in /etc/rc.conf, it fails to start at boot. All permissions and ownership are identical to the other server, where it works. What am I missing, and is there a way to debug this kind of thing?
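
    Two hedged things to check: rc(8) only runs a script at boot if its rcorder header is intact and its rcvar matches the knob in rc.conf, and rcorder may now schedule it before the filesystem holding the relocated binary is mounted. The header should look something like this (the command path being your custom one):

      #!/bin/sh
      # PROVIDE: apache22
      # REQUIRE: LOGIN cleanvar
      # KEYWORD: shutdown

      . /etc/rc.subr
      name="apache22"
      rcvar="apache22_enable"
      command="/your/custom/path/httpd"

    Tracing a manual run with sh -x /usr/local/etc/rc.d/apache22 start, or checking /var/log/messages for boot-time errors, usually narrows it down.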


  • How do I resolve BSOD: PAGE_FAULT_IN_NONPAGED_AREA?

    - by Burnzy
I have been troubleshooting this for a few days and cannot fix it. Computer specifications - Mobo: ASUS Sabertooth X58 LGA 1366 Intel X58 SATA 6Gb/s USB 3.0 ATX Intel motherboard. CPU: Intel Core i7 920 (Bloomfield) @ 2.67GHz (no OC). RAM: 6144MB. GPU: 2x NVIDIA GeForce GTS 250 1GB in SLI (SLI is not enabled at the moment anyway). Drives: OCZ RevoDrive OCZSSDPX-1RVD0120 PCI-E x4 120GB PCI Express MLC internal SSD [RAID-0] (I know this could potentially cause trouble, but I had the BSOD before using this drive); Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM 32MB cache SATA 3.0Gb/s 3.5" internal hard drive, bare drive. Click here for a log of a crash I just had. Click here for a log of a crash I had 30 minutes later; note that it's another driver. Occurrence: it seems pretty random so far; I haven't noticed any kind of pattern. What I've tried: Windows Memory Diagnostic (went smoothly at 1066MHz). As I said, it was still happening on my HDD, so when I bought the RevoDrive I installed a new OS on there and still got the error; I believe I had no drivers installed at that point (not 100% sure). Changed the following registry value to 1 (true): HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown. Tried to lower the RAM clock even more. Made sure the RAM timings were set as recommended by the manufacturer. Verified that the motherboard is in good physical condition (yes, and it's brand new). There is one thing to note: when I got the new motherboard, I installed the new drivers WITHOUT formatting, and then I removed the old motherboard drivers that I could remove from the Control Panel (pretty much the first things that had been installed). Could this cause an issue even ON THE OTHER drive (the RevoDrive)? Hopefully someone can help me; I am getting tired of this, spending so much money and cannot get this to work correctly. If you need any other information, let me know. Thank you!
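
    When the blamed driver changes between crashes, analyzing the dumps is still worth doing, since PAGE_FAULT_IN_NONPAGED_AREA dumps usually name the faulting module and address. A hedged sketch with the Debugging Tools for Windows (the install path and dump filename are examples):

      REM Open the most recent minidump in the kernel debugger:
      "C:\Program Files\Debugging Tools for Windows (x64)\kd.exe" -z C:\Windows\Minidump\Mini040111-01.dmp

      REM At the kd> prompt, let the debugger classify the crash:
      !analyze -v

    Blame rotating across unrelated drivers tends to point back at RAM, the memory controller, or storage rather than at any one driver.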


  • MacOSX: remove write-protect flag from file in Terminal

    - by Albert
Hi, I have a file on a FAT32 volume which is shown as write-protected in Finder (so I cannot move it). Removing that write-protected flag in the information dialog works just fine. However, I have many more such files, so I want to do it via Terminal. I already tried 'chmod +w', but that didn't work. 'ls -la' showed me that the permissions are already fine ("-rwxrwxrwx 1 az az", where az is my user account). Then I thought this might be stored in some xattr properties, but 'xattr -l' didn't give me any entries. Then I thought it might be some ACL setting (whereby I thought those would be stored as xattrs, but let's try it anyway) - a Google search returned something about 'chmod -a' or 'chmod -i'. All these attempts only give me "chmod: No ACL currently associated with file" or "chmod: Failed to set ACL on file...: Operation not permitted". But I definitely have no write access to the file, because I cannot move it or make any other change to it (in Terminal). Removing the write-protect flag in Finder solves that.
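
    Finder's write-protect ("Locked") checkbox maps to the BSD user-immutable file flag (uchg), not to permissions, xattrs, or ACLs, which is why chmod and xattr come up empty. A sketch, with an example path:

      # Show BSD file flags; locked files list "uchg":
      ls -lO /Volumes/FAT32VOL/somefile

      # Clear the locked flag for one file, or recursively for a directory:
      chflags nouchg /Volumes/FAT32VOL/somefile
      chflags -R nouchg /Volumes/FAT32VOL/somedir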


  • Can't delete some directories as Admin

    - by PencilPusher77
I am unable to delete the following directory in Windows 7: C:\ProgramData\Microsoft\Windows\Start Menu\Programs\iTunes. The error message displayed when I try is: "You need permission to perform this action. You require permission from the computer's administrator to make changes to this folder." I am an Administrator and have UAC set to "Never Notify". iTunes is not running either (it's been uninstalled). I have tried running cmd.exe using "Run as administrator" from the right-click context menu and then executing rmdir "C:\ProgramData\Microsoft\Windows\Start Menu\Programs", but it just returns "Access is denied." Any ideas why I can't delete this directory? Thanks!
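
    These Start Menu leftovers are frequently owned by SYSTEM or TrustedInstaller, in which case even an elevated Administrator needs to take ownership before the ACL allows a delete. A hedged sketch, from an elevated prompt:

      takeown /f "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\iTunes" /r /d y
      icacls "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\iTunes" /grant Administrators:F /t
      rmdir /s /q "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\iTunes"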


  • Apache times out after 2 minutes, when Apache TimeOut is 1200

    - by Robert Gowland
We have a setup where the browser makes an HTTP request to Box A, which in turn makes an HTTP request to Box B. What we're encountering is that Box A waits for Box B to respond for two minutes, then the user sees:

      The page cannot be displayed
      Explanation: There is a problem with the page you are trying to reach and it cannot be displayed.
      Try the following:
      * Refresh page: Search for the page again by clicking the Refresh button. The timeout may have occurred due to Internet congestion.
      * Check spelling: Check that you typed the Web page address correctly. The address may have been mistyped.
      * Access from a link: If there is a link to the page you are looking for, try accessing the page from that link.
      Technical Information (for support personnel)
      * Error Code: 404 Not Found. The requested item could not be located. (12028)

    Watching the logs on Box B, we see that it takes 5 minutes to do the work requested. The problem is that the Apache timeouts on both boxes are set to 1200 (20 min), not 120 (2 min). Any ideas where to look?
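
    Two hedged places to look: error 12028 is a WinInet code, which suggests a proxy or the browser itself gave up rather than Apache, and if Box A forwards to Box B through mod_proxy, the wait for the backend is governed by ProxyTimeout rather than by TimeOut. A sketch of pinning it explicitly on Box A (the backend URL is hypothetical):

      ProxyPass        /app http://boxb.internal/app
      ProxyPassReverse /app http://boxb.internal/app
      ProxyTimeout     1200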


  • http://localhost/~admin/ gets a 403

    - by Pavan Katepalli
When I go to localhost/~admin/ or 127.0.0.1/~admin/, my browser says: "Forbidden. You don't have permission to access /~admin/ on this server." How do I change this?! It's driving me nuts! When I go to localhost or 127.0.0.1/, my browser says: "It works!". I'm running Mac OS X 10.8. I created aliases in my .bash_profile file so that I can start, restart and stop Apache quickly:

      alias startApache="sudo apachectl start"
      alias stopApache="sudo apachectl stop"
      alias restartApache="sudo apachectl restart"

    In my /etc/apache2/httpd.conf file I turned on PHP 5:

      LoadModule php5_module libexec/apache2/libphp5.so

    I also made sure to change the permissions on my admin.conf file with this command in Terminal:

      sudo chmod 644 admin.conf

    This is my /etc/apache2/users/admin.conf:

      <Directory "/Users/admin/Sites/">
          Options Indexes MultiViews
          AllowOverride All
          Order allow,deny
          Allow from all
      </Directory>
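
    On 10.8, per-user Sites also require mod_userdir to be enabled and the userdir include un-commented in httpd.conf; if either is missing, every ~user URL returns 403 regardless of the per-user .conf. Lines to check (paths as shipped on 10.8):

      # /etc/apache2/httpd.conf - both lines must be uncommented:
      LoadModule userdir_module libexec/apache2/mod_userdir.so
      Include /private/etc/apache2/extra/httpd-userdir.conf

      # /private/etc/apache2/extra/httpd-userdir.conf - this pulls in users/*.conf:
      Include /private/etc/apache2/users/*.conf

    Then restart Apache and retest.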


  • Virtualbox 4 hangs when trying to install ubuntu guest on ubuntu host system

    - by misterjinx
I'm trying to install Ubuntu Server using VirtualBox 4.0.4 on an Ubuntu 10.10 host OS. I have the ISO image on my hard drive, which I used to perform the install: I edited the settings, added the image in the storage section, and selected it as primary master so I could boot from it and start the install process. But each time I start the installation - at the very beginning or, if I'm lucky, after I click the install link at the welcome screen - the process hangs and the whole computer locks up. This has happened three times already. I even tried to perform the installation using an old CD I had with the 9.10 server version, thinking the ISO image might be the issue, but the problem persists. I don't know what could cause this. My computer is a Dell laptop with an AMD processor (I don't know if this is important). Any help is very appreciated.
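
    A whole-host freeze (rather than just the guest crashing) is often tied to hardware-virtualization handling, so on an AMD laptop it is worth checking whether AMD-V is present and experimenting with it per-VM. A hedged sketch (the VM name is a placeholder):

      # On the host: non-zero output means the CPU advertises VT-x/AMD-V
      egrep -c '(vmx|svm)' /proc/cpuinfo

      # Toggle hardware virtualization for the VM and retry the install:
      VBoxManage modifyvm "ubuntu-server" --hwvirtex off
      VBoxManage modifyvm "ubuntu-server" --hwvirtex on

    Also check the BIOS, since some laptops ship with virtualization disabled there.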


  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
Rackspace's Cloud Sites have a lot of stupid limitations. For example, no SSH (in or out), no shell, no rsync, etc. (even through cron). Recently I learned that you can't reliably use symlinks in Cloud Sites. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared hosting environment split across many disks/servers. I gather different accounts' sites get moved from disk to disk whenever Rackspace decides, supposedly to increase efficiency across the board. After talking with a Rackspace tech, he said they cannot guarantee that symlinks will always work. Obviously this makes sense for a symlink that uses an absolute path like this: /mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir. If your files get moved to a different disk (or whatever they do), the absolute path would be different and the link would now be broken. That makes sense. So next I asked the Rackspace tech whether relative-path symlinks are reliable. Say I have the following link: files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir. You can see that the symlink simply points to a nearby directory's sub-directory. He said they cannot guarantee that even those will always work. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace would do. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some weird custom filesystem where they break when absolute paths change? It seems like a relative-path symlink should be fine, and should only break if the user did something to mess up the directories involved. But when the techs say they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
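
    To the underlying question: a relative symlink stores only the literal path string, and the kernel resolves it against the directory containing the link each time it is opened, so moving the whole tree together leaves it working. A quick demonstration:

      mkdir -p sites/www.mysite.com sites/www.myothersite.com/anotherdir
      ln -s ../www.myothersite.com/anotherdir sites/www.mysite.com/mylink

      mv sites /tmp/moved-sites                        # simulate the account being relocated
      readlink /tmp/moved-sites/www.mysite.com/mylink  # still the stored relative string
      ls /tmp/moved-sites/www.mysite.com/mylink/       # still resolves: the target moved with it

    So on an ordinary POSIX filesystem a relative link only breaks if the two directories stop moving together; whether Cloud Sites preserves link files at all during a migration is a separate question only Rackspace can answer.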


  • Active directory authentication for Ubuntu Linux login and cifs mounting home directories...

    - by Jamie
I've configured my Ubuntu 10.04 Server LTS Beta 2, residing on a Windows network, to authenticate logins using Active Directory and then mount a Windows share to serve as the home directory. Here is what I did, starting from the initial installation of Ubuntu.

    Download and install Ubuntu Server 10.04 LTS Beta 2, then get updates:

      # sudo apt-get update && sudo apt-get upgrade

    Install an SSH server (sshd):

      # sudo apt-get install openssh-server

    Some would argue that you should "lock sshd down" by disabling root logins. I figure if you're smart enough to hack an SSH session for a root password, you're probably not going to be thwarted by the addition of PermitRootLogin no in the /etc/ssh/sshd_config file. If you're paranoid, or simply not convinced, then edit the file or give the following a spin:

      # (grep PermitRootLogin /etc/ssh/sshd_config && sudo sed -ri 's/(PermitRootLogin ).+/\1no/' /etc/ssh/sshd_config) || echo "PermitRootLogin not found. Add it manually."

    Install the required packages:

      # sudo apt-get install winbind samba smbfs smbclient ntp krb5-user

    Do some basic networking housecleaning in preparation for the specific package configurations to come. Determine your Windows domain name, DNS server name, and the IP address of the Active Directory server (for Samba). For convenience I set environment variables for the Windows domain and DNS server. For me it was (my AD IP address was 192.168.20.11):

      # WINDOMAIN=mydomain.local && WINDNS=srv1.$WINDOMAIN

    If you want to figure out what your domain and DNS server are (I was a contractor and didn't know the network), check out this helpful reference. The authentication and file-sharing processes on the Windows and Linux boxes need their clocks to agree. Do this with an NTP service; on the server version of Ubuntu the NTP service comes installed and preconfigured. On the network I was joining, the DNS server also served NTP:

      # sudo sed -ri "s/^(server[ \t]).+/\1$WINDNS/" /etc/ntp.conf
      # sudo /etc/init.d/ntp restart

    We need to christen the Linux box on the new network, which is done by editing the hosts file (replace the DNS entry with the FQDN of the Windows DNS):

      # sudo sed -ri "s/^(127\.0\.0\.1[ \t]).*/\1$(hostname).$WINDOMAIN localhost $(hostname)/" /etc/hosts

    Kerberos configuration. The instructions that follow aren't to be taken literally: the values for MYDOMAIN.LOCAL and srv1.mydomain.local need to be replaced with what's appropriate for your network when you edit the files. Edit the (previously installed) /etc/krb5.conf file. Find the [libdefaults] section and change (or add) the key-value pair (and it is in UPPERCASE WHERE IT NEEDS TO BE):

      [libdefaults]
          default_realm = MYDOMAIN.LOCAL

    Add the following to the [realms] section of the file:

      MYDOMAIN.LOCAL = {
          kdc = srv1.mydomain.local
          admin_server = srv1.mydomain.local
          default_domain = MYDOMAIN.LOCAL
      }

    Add the following to the [domain_realm] section of the file:

      .mydomain.local = MYDOMAIN.LOCAL
      mydomain.local = MYDOMAIN.LOCAL

    Configure Samba. When it's all said and done, I don't know where Samba fits in... I used cifs to mount the Windows shares... regardless, my system works and this is how I did it. Replace /etc/samba/smb.conf (remember, I was working from a clean distro of Ubuntu, so I wasn't worried about breaking anything):

      [global]
          security = ads
          realm = MYDOMAIN.LOCAL
          password server = 192.168.20.11
          workgroup = MYDOMAIN
          idmap uid = 10000-20000
          idmap gid = 10000-20000
          winbind enum users = yes
          winbind enum groups = yes
          template homedir = /home/%D/%U
          template shell = /bin/bash
          client use spnego = yes
          client ntlmv2 auth = yes
          encrypt passwords = yes
          winbind use default domain = yes
          restrict anonymous = 2

    Start and stop the various services:

      # sudo /etc/init.d/winbind stop
      # sudo service smbd restart
      # sudo /etc/init.d/winbind start

    Set up the authentication. Edit /etc/nsswitch.conf. Here are the contents of mine:

      passwd:    compat winbind
      group:     compat winbind
      shadow:    compat winbind
      hosts:     files dns
      networks:  files
      protocols: db files
      services:  db files
      ethers:    db files
      rpc:       db files

    Start and stop the various services again:

      # sudo /etc/init.d/winbind stop
      # sudo service smbd restart
      # sudo /etc/init.d/winbind start

    At this point I could log in; home directories didn't exist, but I could log in. Later I'll come back and add how I got the cifs automounting to work. Numerous resources were considered so I could figure this out. Here is a short list (a number of these links point to my own questions on the topic): Samba Kerberos Active Directory WinBind; Mounting Linux user home directories on CIFS server; Authenticating OpenBSD against Active Directory; How to use Active Directory to authenticate linux users; Mounting windows shares with Active Directory permissions; Using Active Directory authentication with Samba on Ubuntu 9.10 server 64bit; How practical is to authenticate a Linux server against AD?; Auto-mounting a windows share on Linux AD login
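
    Before testing logins, a hedged sanity check that winbind is actually talking to the domain (this assumes the box has already been joined to AD, e.g. via net ads join, a step that is easy to miss):

      # Verify the machine account and enumerate domain users/groups through winbind:
      # sudo net ads testjoin
      # wbinfo -u
      # wbinfo -g

      # Confirm NSS is resolving domain accounts:
      # getent passwd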


  • SharePoint Records Center Submitted E-mail Records not picked up

    - by Kenneth Verburg
We have set up a new SharePoint 2007 site with a Records Repository. We're using Exchange 2007 Managed Folders to route e-mails to this repository based on the 'label' attached to the e-mail, as set in the Exchange 2007 journaling options. E-mails added to a Managed Folder get sent to SharePoint and end up in the "Submitted E-mail Records" list of the Records Repository. That's according to plan, but the e-mails are not routed to the respective document library as defined by the label. Instead, an error appears in the event viewer for every e-mail listed in the Submitted E-mail Records list, on every interval of the records repository schedule (set to every two minutes for testing purposes): "Value cannot be null. Parameter name: g". Sending a document from the SharePoint site itself to the Records Repository via the Send To... link works fine, but e-mails get stuck in the list... We have set up document libraries in the Repository with and without content types (with names matching the label and the record routing rule). Any ideas what could be wrong? This is in the event log, every two minutes:

      Source: Office SharePoint Server
      Category: Records Center
      Type: Error
      Event ID: 4975
      User: N/A
      Computer: SPS2007
      Description: Value cannot be null. Parameter name: g

    For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.


  • SQL Server: restoring a backup results in an error

    - by Mario
I have a database in dev (SQL Server 2005 on Windows Server 2008) that I need to move to prod (SQL Server 2000 on Windows Server 2003). My process is as follows: Log in to dev and open SQL Server Management Studio. Right-click on the database | Tasks | Back Up; keep all default options (full backup etc.). Move the .bak file locally to prod (no network drive), log in to prod, and open SQL Server Enterprise Manager. Right-click the Databases node | All Tasks | Restore database. Change "Restore as database" to reflect the same database name. Click the 'From device' radio button, click 'Select Devices', click Restore from: Add..., and browse to the .bak file (small - only 6MB). Now I am ready to restore the database, so I click OK and get the following error: "The media family on device 'E:...bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE DATABASE is terminating abnormally." This error is immediate. I have tried a few different variations of this: restoring the db to the dev machine with a different db name and log file names (where it originated), creating an empty database with the same physical path to the files before trying to restore to that, and making a few different .bak files, making sure they were verified before uploading them to prod. I know for a fact the directory for the .mdf and .ldf files exists on prod, though the files themselves don't. If, before clicking OK to restore, I go to the Options tab instead, I get the following error: "Error 3241: The media family on device 'E:...bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE FILELIST is terminating abnormally." Anyone have any bright ideas?
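
    One fact worth knowing here: SQL Server backups are not backward compatible, so a backup taken on SQL Server 2005 can never be restored on SQL Server 2000, and error 3241 is exactly how 2000 reports a newer-format backup. A quick hedged check on the dev server to see what wrote the .bak (the path is an example):

      -- SoftwareVersionMajor 9 = SQL Server 2005; 8 = SQL Server 2000
      RESTORE HEADERONLY FROM DISK = 'C:\backups\mydb.bak';

    If the versions really are 2005 -> 2000, the usual fallback is scripting the schema and transferring the data (e.g. DTS/Import-Export or generated scripts) rather than backup/restore.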


  • RRAS VPN on Windows 2k3 AD, can access RRAS server only

    - by nopsax
I'm setting up a test lab and here is the current configuration: 192.168.86.201 - a Windows 2003 machine acting as PDC with AD/DNS/DHCP/WINS; 192.168.86.62 - a Windows 2003 machine acting as the RRAS server with IAS, also a file/print server; 192.168.86.6 - gateway/router to the Internet; 192.168.86.21 - a Windows XP workstation. Everything works on the internal network: file/print/AD etc. Whenever a user connects via VPN to the RRAS server remotely using their domain credentials, they are assigned an IP address from the 192.168.86.201 machine along with the WINS server address etc. The VPN user can then ping/access resources on the RRAS server, but cannot ping/access resources on any other machine by name or IP. However, if I ping by name, it does resolve to the correct IP address - just no replies. I did notice that on the RRAS server the 'internal' interface gets an IP address of 192.168.86.75 when a remote user connects, and the remote user is assigned, for example, 192.168.86.71. The RRAS server responds on both the .62 and .75 IP addresses. The client also unchecks the 'use remote default gateway' option. Also, I tried connecting a laptop to the physical network, joining the domain, then going remote and dialing the connection before domain login, and everything seems to work, e.g. browseable shares via Network Neighborhood. But I can't really join the domain remotely if I cannot access any other resources. I really need to monitor traffic to see what's happening to those packets, but I won't be able to until this weekend. Any help is appreciated; I'll provide whatever configuration details are needed.


  • Windows 7 Upgrade from an OEM disc

    - by user1026361
I recently bought a new laptop with Windows 7 Home Premium and a Windows 7 Pro upgrade key. It also comes with a cavalcade of bloatware, so I would like to start with a fresh install of Windows. I understand that I can upgrade to W7 Pro using the option in the Home Premium Start menu. However, I also own a Windows 7 Pro OEM installation disc from another computer. Can I use this Windows 7 Pro OEM disc to install Windows and, when it asks me for a key, provide the upgrade key I purchased? Or am I required to install Home Premium first and then apply my upgrade key?


  • Disabling kextcache on 10.5.8 and 10.6.3

    - by Jeff Kelley
We use Radmind to manage our Mac OS X loadsets and, as such, often run into difficulty when new OS releases come out due to, among other things, updated kernel extensions. The workflow in the past (OS revisions <= 10.4) was to delete the kernel extension cache, update the extensions, and then reboot; that worked just fine, as the system would re-create missing caches on boot. In Leopard, you need to delete the caches after replacing the kernel extensions with their new versions, because the system starts re-creating the caches as soon as you replace the extensions; the only way to ensure you don't have invalid extensions cached is to delete the cache right before rebooting. I'm looking for a way to prevent the kernel extension cache from being re-created until the next reboot. If you modify the contents of /System/Library/Extensions/, kextcache starts up automatically. I've looked through /System/Library/LaunchDaemons/ and other places, but I can't find whatever it is that's starting kextcache. Any ideas?
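
    For what it's worth, the rebuild appears to be triggered by kextd noticing changes to /System/Library/Extensions rather than by a dedicated kextcache job in LaunchDaemons, so rather than disabling it, a hedged fallback is to let it run and delete the caches as the very last step before reboot; the locations differ by release:

      # 10.5: the startup cache is a single mkext file
      sudo rm -f /System/Library/Extensions.mkext

      # 10.6: caches live in a directory
      sudo rm -rf /System/Library/Caches/com.apple.kext.caches

      # Either release: bump the mod time so a clean rebuild happens at next boot
      sudo touch /System/Library/Extensions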


  • Weirdness After Reinstalling the Windows Operating System

    - by Eka Anggraini
I want to ask for your advice. I reinstalled my OS successfully, and I shut it down and restarted it a few times with no problems. A few hours later, when I turned it on, there was suddenly an error:

      Windows could not start because the following file is missing or corrupt:
      \WINDOWS\SYSTEM32\CONFIG\SYSTEM
      You can attempt to repair this file by starting Windows Setup using the original Setup CD-ROM
      Select 'R' at the first screen to start repair.

    So I intended to repair by reinstalling everything again. I booted the Windows CD and pressed a key, and with the blue background the text below appeared:

      Setup is loading files (windows executive)
      Setup is loading files (hardware abstraction layer)

    Then I waited half an hour with no change; repeating the process several times also got me nothing. Where does the problem lie: the hardware or the Windows CD? Please advise.
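
    The missing-file message points at a corrupt SYSTEM registry hive rather than at the CD; Microsoft's documented recovery for this (KB307545) restores the repair copy of the hive from the Recovery Console. A hedged sketch of the core step, assuming Windows lives in C:\WINDOWS:

      REM From the Recovery Console (press R at the first setup screen):
      cd \windows\system32\config
      ren system system.bak
      copy \windows\repair\system system
      exit

    Setup hanging at "loading files" for half an hour is a separate red flag, though; that pattern often means failing RAM or a failing disk, so a memory test and a drive diagnostic are worth running before trusting any repair.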

