Search Results

Search found 16899 results on 676 pages for 'local'.

  • Sharepoint Server 2007 generates event log entry every 5 minutes - "The SSP Timer Job Distribution List Import Job was not run"

    - by Teevus
    I get the following error logged to the Event Log every 5 minutes:

        The SSP Timer Job Distribution List Import Job was not run.
        Reason: Logon failure: the user has not been granted the requested logon type at this computer

    In addition, OWSTimer.exe periodically gets into a state where it's consuming almost all the CPU, and only killing the process or restarting the SharePoint services fixes it (although I'm not sure whether this is a related or separate issue). I have tried the following (based on various suggestions floating around the web), all to no avail:

    - iisreset (no effect)
    - Added the SharePoint and SharePoint Search service accounts to the "Log on as a batch job" and "Log on as a service" policies in the Group Policies for the domain, then went into the Local Computer Policy on the SharePoint server and verified that those policies had actually been applied
    - Verified that the SharePoint and SharePoint Search service accounts are both in the WSS_WPG group
    - Verified in dcomcnfg that the WSS_WPG group (and indeed the SharePoint and SharePoint Search service accounts) has local activation rights for SPSearch

    Any more suggestions would be valued. Thanks

  • SMTP for multiple domains on virtual interfaces

    - by Pawel Goscicki
    The setup is like this (Ubuntu 9.10):

        eth0    1.1.1.1  name.isp.com
        eth0:0  2.2.2.2  example2.com
        eth0:1  3.3.3.3  example3.com

    example2.com and example3.com are web apps which need to send emails to their users. 2.2.2.2 points to example2.com and vice versa (A/PTR); MX is Google, and Google handles all incoming mail. Likewise, 3.3.3.3 points to example3.com and vice versa (A/PTR); MX is Google, and Google handles all incoming mail.

    Requirements:

    - Local delivery must be disabled (mail must be delivered to the server specified by MX), so that the following works (note that there is no local user bob on the machine, but there is an existing bob email user):

        echo "Test" | mail -s "Test 6" [email protected]

    - I need to be able to specify from which IP/domain name the email is delivered when sending an email.

    I fought with sendmail, with not much luck. Here's some debug info:

        sendmail -d0.12 -bt < /dev/null
        Canonical name: name.isp.com
        UUCP nodename: host
        a.k.a.: example2.com
        a.k.a.: example3.com
        ...

    Sendmail always uses the canonical name (taken from eth0); I've found no way for it to select one of the alternate (a.k.a.) names. It uses the canonical name when sending email:

        echo -e "To: [email protected]\nSubject: Test\nTest\n" | sendmail -bm -t -v
        [email protected]... Connecting to [127.0.0.1] via relay...
        220 name.isp.com ESMTP Sendmail 8.14.3/8.14.3/Debian-9ubuntu1; Wed, 31 Mar 2010 16:33:55 +0200; (No UCE/UBE) logging access from: localhost(OK)-localhost [127.0.0.1]
        >>> EHLO name.isp.com

    I'm OK with other SMTP solutions. I've looked briefly at nbsmtp, msmtp and nullmailer, but I'm not sure they can deal with disabling local delivery and selecting different domains when sending emails. I also know about spoofing the sender field by using mail -a "From: <[email protected]>", but that seems to be a half-solution (mails are still sent from the isp.com domain instead of the proper example2.com, so the PTR records go unused and there's more risk of being flagged as spam/spammer).
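
    One possible direction, not taken from the question: Postfix (2.7 and later) can pick an outbound SMTP client per sender domain, and each client can bind its own source IP and HELO name. A minimal sketch, reusing the two domains and IPs above; everything else is illustrative:

        # main.cf: route mail by envelope sender domain
        sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport

        # /etc/postfix/sender_transport (run postmap on it after editing):
        #   @example2.com  out2:
        #   @example3.com  out3:

        # master.cf: one smtp client per domain, bound to its own address
        out2  unix  -  -  n  -  -  smtp
          -o smtp_bind_address=2.2.2.2
          -o smtp_helo_name=example2.com
        out3  unix  -  -  n  -  -  smtp
          -o smtp_bind_address=3.3.3.3
          -o smtp_helo_name=example3.com

    With mydestination left empty, nothing is treated as local, which would also cover the first requirement.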

  • VPN and Bonjour conflicting

    - by JW.
    Does anyone know why a VPN connection might interfere with Apple's Bonjour? I've noticed that my Mac and various iDevices have trouble finding each other on my local network when I have VPN connections open. Things like Home Sharing and Wi-Fi Sync work some of the time, but sometimes fail to find the other device. The VPN connections are made using IPSecuritas, which is a GUI around racoon; I have the local "endpoint mode" set to Host. Apple mentions that Home Sharing may conflict with VPNs, but they don't specify why, or how to fix it. I'm using a Mac with OS X 10.7.3 and IPSecuritas to connect to the VPN, an iPhone, and an iPad.

  • Windows service application runs fine on Windows XP but crashes on Windows 7

    - by Abbas Siddiqui
    I'm sorry if my question has been asked before; I searched extensively but didn't find it. If it has, please post a link to that question. I have developed a Windows service that works fine on Windows XP. When I installed it on Windows 7, it installed and worked fine for a few minutes, after which it crashes with the message "... has stopped working. Windows is checking for a solution to the problem." The log entry is as follows:

        Fault bucket 1155193276, type 5
        Event Name: CLR20r3
        Response: Not available
        Cab Id: 0

        Problem signature:
        P1: windowsserviceapp.exe
        P2: 1.0.0.0
        P3: 4bf29a85
        P4: System.Windows.Forms
        P5: 2.0.0.0
        P6: 4a275ebd
        P7: 16cf
        P8: 159
        P9: System.ComponentModel.Win32
        P10:

        Attached files:
        C:\Users\DELL\AppData\Local\Temp\WERF98D.tmp.WERInternalMetadata.xml

        These files may be available here:
        C:\Users\DELL\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_windowsserviceap_89ea5da5168ff1535681aa613b5f7bf2b1636dc_111d24f1

        Analysis symbol:
        Rechecking for solution: 0
        Report Id: 24dc8c83-62a1-11df-b1ee-00271352d813

  • Windows 8 with LiveID login authenticates as Guest to remote SQL Server

    - by Tim Long
    I have a network where several users are using Office Accounting 2009 in multi-user client/server mode. OA is built on SQL Server. One PC acts as the 'server' and has the SQL Server instance; the others have only the application installed and no SQL instance, and all of the apps connect remotely to the SQL instance on the 'server'. I'm using the term 'server' loosely here: it is just a normal workstation that happens to be designated as the server and runs the SQL instance. There is no NT domain; all user accounts are local accounts. The way that OA works in multi-user mode is that each user is required to have a local account with the same username and password on both the client and 'server' PCs.

    This had been working well; now along comes Windows 8. I use my 'Microsoft Account', aka LiveID, to log into Windows 8. Office Accounting runs fine and attempts to connect to the database, but fails: 'you do not have permission to perform this operation'. In the SQL logs, I get this error:

        2012-10-28 17:54:01.32 Logon  Error: 18456, Severity: 14, State: 11.
        2012-10-28 17:54:01.32 Logon  Login failed for user 'SERVER\Guest'. Reason: Token-based server access validation failed with an infrastructure error.

    SERVER is the hostname of the server. So it seems to be authenticating as 'Guest'?? To verify this, I enabled the Guest account on the 'server' PC and then added Guest as an allowed user within Office Accounting (this simply creates the user in SQL and gives it an appropriate database role). Sure enough, my Windows 8 PC was then able to connect to the database when using Office Accounting.

    Clearly, having users authenticate as 'Guest' stinks from a security and auditing standpoint. So what I need are some ideas for how to work around this. I've tried switching the Windows 8 PC to a 'local account' and that works too, but requires giving up significant functionality on the Windows 8 PC. What I really need is a way to force the Windows 8 PC to use a specific set of credentials when connecting to the remote SQL instance. Office Accounting takes the logged-in username, which is my LiveID and doesn't correspond to any Windows user name. Anyone solved this issue?

  • Where is a good place to start to learn about custom caching in .Net

    - by John
    I'm looking to make some performance enhancements to our site, but I'm not sure exactly where to begin. We have some custom object caching, but I think that we can do better.

    Our business: We aggregate news stories on a news-type web site. We get approximately 500-1000 new stories per week. We have index pages that show various lists of the items and details pages that show the individual stories.

    Our current use case, getting an individual story:

    - User makes a request.
    - The Data Access Layer (DAL) checks to see if the item is in cache and if the item is fresh (15 minutes).
    - If the item is not in cache or is not fresh, retrieve the item from SQL Server, save it to cache and return it to the user.

    Problems with this approach: The pull nature of caching means that users have to pay the waiting cost every time the cache is refreshed. Once a story is published, it changes infrequently, and I think that we should replace the pull model with something better.

    My initial thought is that stories should ALL be stored locally in some type of dictionary (Cache, or is there another, better way?). If the story is not found, then make a trip to the database, update the local dictionary and send the item back. Since there may be occasional updates to stories, refreshing them should be an entirely separate process from the user. I watched a video by Brent Ozar, How StackOverflow Scales SQL Server, in which Brent states "the fastest database query is the one that you don't make".

    Where do I start? At this point, I don't know exactly what the solution is. Is it caching? Is there a better way of using local storage? Do I use a Dictionary, OrderedDictionary, List? It seems daunting, and I'm just looking for some good starting points to learn more about how to do this type of optimization.

  • zip being too nice (Mac OS X)

    - by stib
    I use zip to do a regular backup of a local directory onto a remote machine. They don't believe in things like rsync here, so it's the best I can do (?). Here's the script I use:

        echo $(date) >> ~/backuplog.txt
        if [[ -e /Volumes/backup/ ]]; then
            cd /Volumes/Non-RAID_Storage/
            for file in projects/*; do
                nice -n 10 zip -vru9 /Volumes/backup/nonRaidStorage.backup.zip "$file" 2>&1 \
                    | grep -v "zip info: local extra (21 bytes)" >> ~/backuplog.txt
            done
        else
            echo "backup volume not mounted" >> ~/backuplog.txt
        fi

    This all works fine, except that zip never uses much CPU, so it seems to be taking longer than it should. It never seems to get above 5%. I tried making it nice -20 but that didn't make any difference. Is it just the network or disc speeds bottlenecking the process, or am I doing something wrong?
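
    One way to narrow this down (a diagnostic sketch, not from the question; the file name is a placeholder): time the same zip run against a local target and against the mounted volume. If CPU stays low in both cases, the source disk is the limit; if only the second run is slow, the network mount is.

        # compress to a local target first...
        time zip -ru9 /tmp/local-test.zip projects/some-large-project
        # ...then do the same work against the network volume
        time zip -ru9 /Volumes/backup/net-test.zip projects/some-large-project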

  • arm-none-eabi-gcc does not work from mounted directory

    - by dmytro_lviv
    I want to build a project for an ARM microcontroller. For this purpose, a script was placed in the project folder which downloads the toolchain and builds it; after running the script, the toolchain sits inside the project folder. The project folder is on another logical disk (shared between Windows and Linux), and I mount this disk by hand each time I start developing. When I run make, I receive this error in the terminal:

        make[3]: arm-none-eabi-gcc: Command not found

    The output from echo $PATH:

        /mnt/Smoothie-master/gcc-arm-none-eabi/bin:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games

    The output from whereis arm-none-eabi-gcc:

        arm-none-eabi-gcc:

    All binary files belonging to this toolchain are placed in the directory /mnt/Smoothie-master/gcc-arm-none-eabi/bin/ and have permissions "-rwxrwxrwx". Before building this toolchain, I had another similar one (a different version of it), installed through apt-get; it was removed through apt-get before building the new one. Where is the problem? Thanks!
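
    Two quick checks, as a diagnostic sketch (the paths are taken from the question); either condition below can make a binary that plainly exists fail to run:

        # 1) is the shared disk mounted with the noexec (or user) option?
        mount | grep Smoothie-master
        # 2) is this a 32-bit toolchain on a 64-bit host that is missing
        #    its 32-bit loader and libraries?
        file /mnt/Smoothie-master/gcc-arm-none-eabi/bin/arm-none-eabi-gcc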

  • Multiple Memcached server /etc/init.d startup script that works ?

    - by p4guru
    I installed the memcached server from source and can get a standard startup script installed for one memcached server instance, but after trying several scripts found via Google, I can't find one that works to manage automatic startup on boot for multiple memcached server instances. I've tried both of these and neither works; service memcached start just returns to the command prompt with no memcached server instances started:

        lullabot.com/articles/installing-memcached-redhat-or-centos
        addmoremem.blogspot.com/2010/09/running-multiple-instances-of-memcached.html

    This bash script works, but doesn't start the memcached instances at boot:

        #!/bin/sh
        case "$1" in
            start)
                /usr/local/bin/memcached -d -m 16 -p 11211 -u nobody
                /usr/local/bin/memcached -d -m 16 -p 11212 -u nobody
                ;;
            stop)
                killall memcached
                ;;
        esac

    OS: CentOS 5.5 64-bit. Memcached = v1.4.5, Memcache = v2.2.5. Can anyone point me to a working /etc/init.d/ startup script to manage multiple memcached servers? Thanks
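
    A minimal sketch extending that same script so CentOS can register it at boot (the chkconfig header values are assumptions, not from the question):

        #!/bin/sh
        # chkconfig: 2345 90 10
        # description: two memcached instances on ports 11211 and 11212

        start() {
            /usr/local/bin/memcached -d -m 16 -p 11211 -u nobody
            /usr/local/bin/memcached -d -m 16 -p 11212 -u nobody
        }

        stop() {
            killall memcached
        }

        case "$1" in
            start)   start ;;
            stop)    stop ;;
            restart) stop; sleep 1; start ;;
            *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
        esac

    Saved as /etc/init.d/memcached and made executable, it can then be hooked into the boot sequence with chkconfig --add memcached && chkconfig memcached on.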

  • Post-compile PHP 5.4 cURL installation

    - by user140657
    I recently compiled PHP 5.4 from source on CentOS 6, using this configuration:

        # ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql
        # make
        # make install
        # cp php.ini-dist /usr/local/lib/php.ini

    I realize now that I do not have cURL installed, and I don't know how to install cURL after a compiled installation of PHP. Using yum install php-curl installs cURL for PHP 5.3. I already tried that, with an Apache restart, and it did not show up in my phpinfo() output. How do I install cURL under these circumstances?
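
    Since this PHP was built from source, the usual route is to rebuild it with cURL enabled rather than via yum. A sketch, assuming the original source tree is still on disk (the path is a placeholder; libcurl-devel is the CentOS 6 package with the cURL headers):

        # install the cURL development headers
        yum install libcurl-devel
        # reconfigure with the same options as before, plus --with-curl
        cd /path/to/php-5.4-source
        ./configure --with-apxs2=/usr/local/apache2/bin/apxs \
                    --with-mysql --with-curl
        make && make install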

  • Failing to achieve tunneling to fresh Ubuntu 10.04 server

    - by user65297
    I've just set up a new 10.04 server and can't get tunneling to work. From the local machine:

        ssh -L 9090:localhost:9090 [email protected]

    The login succeeds, but when I then try the tunnel from a local browser (http://127.0.0.1:9090), the server terminal echoes:

        channel 3: open failed: connect failed: Connection refused

    and auth.log shows:

        sshd[24502]: error: connect_to localhost port 9090: failed.

    iptables -L:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    Accessing port 9090 on the server directly works (http://xx.xxx.xx.xx:9090 in links). sshd_config is identical to a previous 8.04 server, which works fine. What's going on? Thankful for any input. Regards, //t
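
    The symptoms fit a service that listens on the external address but not on localhost: sshd connects the tunnel to localhost:9090 on the server, which is refused, while direct access works. A sketch of the check and a workaround (the external address is elided as in the question):

        # on the server: which address is the service bound to?
        sudo netstat -tlnp | grep 9090
        # if it only listens on the external address, tunnel to that instead:
        ssh -L 9090:xx.xxx.xx.xx:9090 user@xx.xxx.xx.xx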

  • Speed up loading of test results from builds in Visual Studio

    - by Jakob Ehn
    I still see people complaining about the long time it takes to load test results from a TFS build in Visual Studio. They make a valid point: it does take a very long time, even for a small number of tests. The reason is that the test results include not just the result of the test run but also all the binaries that were part of it; this often means the debug symbols (*.pdb) are downloaded to your local machine as well. The point of this behaviour is that it lets you re-run the tests locally. Most of the time, though, that is not what the developer wants; they just want to know which tests failed and why, so they can fix the tests and rerun them locally.

    It turns out there is a way to load only the test results, which is much faster. The only tricky bit is finding the location of the .trx file that is generated during the build, particularly in TFS 2010 where you often have multiple build agents, which of course results in different paths to the .trx file. Note: to use this you must have read permission on the build folder of the build agent where the build was executed.

    - Open the build result for the build.
    - Click View Log.
    - Locate the part where MSTest is invoked; when using test containers, this shows up as an MSTest command line. (Tip: you can search in the log window. Press Ctrl+F and you get a little search box at the bottom. Nice!)
    - On the MSTest command line, locate the /resultsfileroot parameter, which points to the folder where the test results are stored.
    - Note that this path is local to the build server, so you need to replace the drive letter with the server name: D:\Builds\Project\TestResults becomes \\<BuildServer>\Project\TestResults.
    - Double-click the .trx file and you will notice that it loads much faster than opening it from the build log window.

  • meld on OS X 10.7 doesn't work?

    - by klm123
    I'm installing meld on Mac OS X 10.7 using MacPorts. It downloaded all the dependencies and reported that everything is OK:

        Staging meld into destroot
        Installing meld @1.5.3_0
        Activating meld @1.5.3_0
        Cleaning meld
        Updating database of binaries: 100.0%
        Scanning binaries for linking errors: 100.0%
        No broken files found.

    But when I run meld:

        [18:28:24]~$ meld
        Traceback (most recent call last):
          File "/opt/local/bin/meld", line 75, in <module>
            locale.setlocale(locale.LC_ALL,'')
          File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/locale.py", line 539, in setlocale
            return _setlocale(category, locale)
        locale.Error: unsupported locale setting

    What is the problem, and how do I deal with it?
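
    The traceback is Python's locale.setlocale(locale.LC_ALL, '') rejecting whatever locale the shell environment advertises. A common workaround, as a sketch (en_US.UTF-8 is an assumption; any locale the MacPorts Python supports will do):

        # export a locale Python recognizes, then launch meld
        export LC_ALL=en_US.UTF-8
        export LANG=en_US.UTF-8
        meld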

  • ubuntu server 10 - slow and can't remove desktop environment

    - by Alex
    I'm running Ubuntu Server 10.10 with the desktop environment. Simple page requests are taking over 5 seconds, even when connecting to the server through our local network. I believe this is partially related to having the desktop environment installed, as the server worked faster without it (though still not as fast as it should, considering that it's on the local network), but tasksel fails every time (aptitude failed 100) when I try to remove it. My knowledge of networking and Linux in general is limited, so I would really appreciate ideas on how to troubleshoot this problem. Oh, also: in the system monitor, one of the processors is almost always around 100%. I doubt this is normal either...
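
    Two starting points, as a sketch (not from the question): identify the busy process, and bypass tasksel by removing the desktop metapackage directly. Note that ubuntu-desktop is only a metapackage, so removing it is a first step rather than a complete removal of the desktop stack.

        # see which process is pinning the CPU
        top
        # remove the desktop metapackage without going through tasksel
        sudo apt-get remove ubuntu-desktop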

  • How to use CLEAR USB internet connection in Ubuntu (host) and WindowsXP (guest) using VirtualBox

    - by bithacker
    I'm trying to use a CLEAR Motorola WiMAX USB modem in Ubuntu, as there is no Linux support for it yet. I've installed Windows XP as a guest in Ubuntu; the VirtualBox version I'm using is 3.2.2. The USB modem connects fine in Windows XP, but I can't use the internet in Ubuntu. Can you please tell me how to do it? Here is the configuration, which should help. Thanks in advance.

    I'm using two network adapters:

        Adapter 1: PCnet-FAST III (NAT)
        Adapter 2: PCnet-FAST III (Host-only adapter, 'vboxnet0')

    ipconfig [on guest Windows XP]:

        Windows IP Configuration

        Ethernet adapter Local Area Connection: PCnet-FAST III (NAT)
            Connection-specific DNS Suffix . :
            IP Address. . . . . . . . . . . . : 10.0.2.15
            Subnet Mask . . . . . . . . . . . : 255.255.255.0
            Default Gateway . . . . . . . . . : 10.0.2.2

        Ethernet adapter Local Area Connection 3: PCnet-FAST III (Host-only adapter, 'vboxnet0')
            Connection-specific DNS Suffix . :
            IP Address. . . . . . . . . . . . : 192.168.56.101
            Subnet Mask . . . . . . . . . . . : 255.255.255.0
            Default Gateway . . . . . . . . . :

        Ethernet adapter Local Area Connection 2: CLEAR Motorola USB
            Connection-specific DNS Suffix . :
            IP Address. . . . . . . . . . . . : 10.168.242.33
            Subnet Mask . . . . . . . . . . . : 255.255.192.0
            Default Gateway . . . . . . . . . : 10.168.192.2

    ifconfig [on host Ubuntu]:

        eth0 (Ethernet)  Link encap:Ethernet  HWaddr 00:14:22:b9:9d:76
                 UP BROADCAST MULTICAST  MTU:1500  Metric:1
                 RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                 collisions:0 txqueuelen:1000
                 RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                 Interrupt:16

        eth1 (Wireless)  Link encap:Ethernet  HWaddr 00:13:ce:f0:9b:0d
                 inet6 addr: fe80::213:ceff:fef0:9b0d/64 Scope:Link
                 UP BROADCAST MULTICAST  MTU:1500  Metric:1
                 RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                 TX packets:1 errors:0 dropped:5 overruns:0 carrier:0
                 collisions:0 txqueuelen:1000
                 RX bytes:0 (0.0 B)  TX bytes:84 (84.0 B)
                 Interrupt:17 Base address:0xe000 Memory:dfcff000-dfcfffff

        lo       Link encap:Local Loopback
                 inet addr:127.0.0.1  Mask:255.0.0.0
                 inet6 addr: ::1/128 Scope:Host
                 UP LOOPBACK RUNNING  MTU:16436  Metric:1
                 RX packets:2292 errors:0 dropped:0 overruns:0 frame:0
                 TX packets:2292 errors:0 dropped:0 overruns:0 carrier:0
                 collisions:0 txqueuelen:0
                 RX bytes:171952 (171.9 KB)  TX bytes:171952 (171.9 KB)

        vboxnet0 Link encap:Ethernet  HWaddr 0a:00:27:00:00:00
                 inet addr:192.168.56.1  Bcast:192.168.56.255  Mask:255.255.255.0
                 inet6 addr: fe80::800:27ff:fe00:0/64 Scope:Link
                 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                 RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                 TX packets:137 errors:0 dropped:0 overruns:0 carrier:0
                 collisions:0 txqueuelen:1000
                 RX bytes:0 (0.0 B)  TX bytes:21174 (21.1 KB)
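
    One possible direction, stated as an assumption rather than a tested recipe: enable Internet Connection Sharing inside the XP guest, sharing the CLEAR adapter (Local Area Connection 2) onto the host-only adapter (Local Area Connection 3), and then route the Ubuntu host through the guest. XP's ICS may renumber the host-only adapter (it typically forces 192.168.0.1), in which case use that address as the gateway below instead of 192.168.56.101:

        # on the Ubuntu host: default route via the guest's host-only address
        sudo route add default gw 192.168.56.101
        # point DNS at a server reachable through the shared connection
        echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf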

  • How to configure fastcgi with lighttpd

    - by silverburgh
    Hi, I am trying to configure FastCGI with the lighttpd server. I was able to run vanilla lighttpd like this:

        ./lighttpd -f lighttpd.conf

    Then I compiled and installed the FastCGI source, and added the following to my lighttpd.conf:

        fastcgi.server = (
          "/fastcgi_scripts/" => ((
            #"host" => "127.0.0.1",
            #"port" => 9091,
            "check-local" => "disable",
            "bin-path" => "/usr/local/bin/cgi-fcgi",
            "docroot" => "/"  # remote server may use its own docroot
          ))
        )

    But lighttpd won't start after I add the above. Can you please tell me how I can run FastCGI with lighttpd? I want to use a C program with FastCGI and lighttpd. Thank you.
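
    One detail worth checking: /usr/local/bin/cgi-fcgi is the FastCGI devkit's command-line bridge tool, not a FastCGI application, so pointing bin-path at it is unlikely to work. A minimal sketch for spawning a compiled C FastCGI binary directly (my_fcgi_app is a hypothetical program built against libfcgi; mod_fastcgi must be loaded):

        server.modules += ( "mod_fastcgi" )
        fastcgi.server = (
          "/fastcgi_scripts/" => ((
            "bin-path"    => "/usr/local/bin/my_fcgi_app",
            "socket"      => "/tmp/my_fcgi_app.sock",
            "check-local" => "disable"
          ))
        )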

  • PHP Kohana CentOS 5

    - by Undefined
    I'm trying to deploy a Kohana-based project on CentOS 5. I installed PHP 5.3.1 but I'm still getting the following error:

        Warning: preg_match() [function.preg-match]: Compilation failed: this version of PCRE is not compiled with PCRE_UTF8 support at offset 0 in /usr/local/apache2/htdocs/icarus/system/core/utf8.php on line 30

        Fatal error: PCRE has not been compiled with UTF-8 support. See PCRE Pattern Modifiers for more information. This application cannot be run without UTF-8 support. in /usr/local/apache2/htdocs/icarus/system/core/utf8.php on line 38

    I've been trying for the last two days; I upgraded my PHP from 5.1 to 5.3 but I'm still getting the same error. As far as I can tell, the problem is that the PCRE library PHP reports in phpinfo() dates from September 2004. This is the actual line:

        PCRE Library Version 5.0 13-Sep-2004

    Can anyone tell me how to upgrade it, or what the solution to the problem is? Thanks.
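
    A sketch of one way out, assuming PHP is being built from source (the PCRE version and paths are placeholders): build a current PCRE with UTF-8 enabled and point PHP's configure at it.

        # after downloading and unpacking a current PCRE source tarball:
        cd pcre-8.x
        ./configure --enable-utf8 --enable-unicode-properties \
                    --prefix=/usr/local/pcre
        make && make install
        # then rebuild PHP against it:
        #   ./configure ... --with-pcre-regex=/usr/local/pcre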

  • IIS: changing site's home directory while site is running

    - by Jeff Stewart
    I'm trying to understand exactly what IIS 6.0 (on Windows Server 2003) does when I change the "Local Path" of a web site's Home Directory while the site is running. (Specifically with regard to ASP.NET applications.) I'm trying to build support for or against this practice in a deployment scenario: e.g. deploy the new code alongside the old code, then simply switch the IIS web site's local path to the folder containing the new code. IIS seems to handle this gracefully, but I notice that w3wp.exe still keeps some handles on the old code folder after the change. That's strange to me, because I would have expected IIS to recycle the application pool if this happened. Is this safe? Is the behavior well-defined?

  • How to execute files on LAN drives in a Windows Domain

    - by matnagel
    We have a very small LAN here, but some people think we need Active Directory, though nobody knows how to maintain it. I am not in a position to change this. How can I get full access (on Linux it would be "execute" rights) to files on network drives? The files are just on another machine in the next room, and my account is in the Administrators group on a Windows 2003 Server domain controller, yet I cannot open simple MS Access 2000 databases or CHM files from network drives on the LAN. How do I do that? Some policy setting? I want to change it once and for all; the current behaviour is useless, since we make no distinction between local and network files here, and otherwise I would have to copy everything to a local drive before doing what I want.

  • VMWare Fusion cannot connect to the NAT connection on my Mac

    - by FFish
    I have been using VMware Fusion on my Mac to check my websites on localhost. Now I can't connect any more using the NAT connection; there seems to be a problem with my IP address or MAC address. I have no idea what causes this. It was working fine before!

    In the XP (SP2) VM, in the taskbar, I see the Local Area Connection with a yellow warning icon. The bubble says: "This connection has limited or no connectivity. You might not be able to access the Internet or some network resources. For more information, click this message." Doing that opens the Local Area Connection Status panel. In the Support tab, when I click the Repair button I get the following message: "Windows could not finish repairing the problem because the following action cannot be completed: Renewing IP address." Any help would be greatly appreciated.

  • "Recipient address rejected" when sending an email to an external address with sendgrid

    - by WJB
    In Postfix, I'm using relayhost to send email to external addresses through SendGrid, but when sending from my PHP code I get an error about the local recipient table. This is my main.cf in /postfix/:

        ## -- Sendgrid
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = static:username:password
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = may
        header_size_limit = 4096000
        relayhost = [smtp.sendgrid.net]:587

    This is the error message from the log:

        postfix/smtpd[53598]: [ID 197553 mail.info] NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 550 5.1.1 Recipient address rejected: User unknown in local recipient table; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<localhost.localdomain>

    One interesting thing: when I use "sendmail [email protected]" from the command line, the email is delivered successfully through SendGrid. I think that's because it goes through postfix/smtp instead of postfix/smtpd; the log for it says:

        postfix/smtp[18670]: [ID 197553 mail.info] AAF7313A7E: to=, relay=smtp.sendgrid.net[50.97.69.148]:587, delay=4.1, delays=3.5/0.02/0.44/0.18, dsn=2.0.0, status=sent (250 Delivery in progress)

    Thank you
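
    The 550 is smtpd rejecting the recipient as local before relayhost is ever consulted, which means the recipient's domain matches mydestination (or another local domain class). A sketch of the usual check and fix; whether emptying mydestination is appropriate depends on whether the box should ever receive mail for itself:

        # which domains does this postfix consider local?
        postconf mydestination
        # if the rejected recipient's domain is listed, remove it and reload
        postconf -e 'mydestination ='
        postfix reload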

  • How can I set Windows Server 2008 junction points to Everyone Read Allow?

    - by brendan
    We have apps running on Windows Server 2003 and now on 2008 as well. Unfortunately, some of our code relies on inspecting the Documents and Settings directory, which no longer exists in Windows 2008. It looks like there are "junction points" set up for backwards compatibility (http://msdn.microsoft.com/en-us/library/bb756982.aspx), but it seems like nothing I do can get me access. I basically need to be able to call the following from a command line on both 2003 and 2008:

        C:\Documents and Settings\Administrator\Local Settings\Application Data\Google\Chrome\Application\chrome.exe

    which translates in Windows 2008 to:

        C:\Users\Administrator\AppData\Local\Google\Chrome\Application\chrome.exe

    I've tried setting up my own "Documents and Settings" folder in 2008, but it won't let me, as that name seems to be reserved for the junction points.

  • Killing Self-resurrecting Files

    - by Zian Choy
    Local computer: Dell desktop, Windows XP, named "5"
    Server: Windows Server 2008, named "W"

    Whenever I delete a file, log out, and log in, the file reappears. The files all supposedly live on the server (e.g. \\W\Home$\zchoy\Desktop), but even when the files are deleted from both the server and the local computer, they come back. I've already tried resetting the offline files cache. I tried deleting a file and then synchronizing with the server, and the file didn't come back; however, as mentioned earlier, it does come back once I log out and log in.

  • Connection timed out exception, why?

    - by Dheeraj Kumar Aggarwal
    I am developing an application which uses the embedded Tomcat 7 server and deploys a web application on it. My application accesses the embedded webapp through REST APIs, but my clients are getting Connection Timed Out exceptions, even though the port is not blocked. I never get this exception when I install the application on my local machine. Some points:

    - The IP address is used in the host name part (they are able to access this IP address on other ports)
    - The port is not blocked
    - We are using the Apache HttpClient library to access the URL
    - The timeout interval does not seem to be the issue

    What are the possible reasons for this Connection Timed Out exception? And how can I simulate the problem on my local machine?
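
    For the simulation part, a sketch (8080 stands in for whatever port the embedded Tomcat listens on): a firewall rule that silently drops packets to the port reproduces a connect timeout, whereas rejecting them would produce "connection refused" instead:

        # drop incoming TCP to the app port: clients now see a connect timeout
        sudo iptables -A INPUT -p tcp --dport 8080 -j DROP
        # remove the rule when done
        sudo iptables -D INPUT -p tcp --dport 8080 -j DROP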
