Search Results

Search found 14236 results on 570 pages for 'times square'.

  • SSH sometimes screws up connection when terminal overflows?

    - by SeveQ
    I've got a problem with SSH on a Debian Lenny based server (it's a vHost within a Xen environment, booted on a Xen kernel). I hope someone can help me with this. The SSH connection somehow gets screwed up frequently when the terminal overflows (new lines beyond the bottom of the terminal, usually forcing it to scroll). The connection gets lost but is not regularly disconnected. It nearly always happens when I do the following: an existing SSH connection gets disconnected (regularly); I tell PuTTY to reestablish the connection; the login prompt appears at the very bottom of the PuTTY terminal window; I enter my login name and press the enter key; I'm asked for the password, I enter it, press the enter key and BOOM! Nothing more happens. I have to reconnect again. So it is reproducible. I'm not totally sure whether the connection crashes before or after I enter the password. Furthermore, it also happens when there is a lot of text to be displayed (for example when I compile something or do an ls -l on a directory with many entries). Using 'screen', however, helps to reduce the frequency of occurrence but doesn't solve the problem completely. Its occurrence is independent of which terminal software I use. I mostly use PuTTY but it also happens with other clients. I certainly hope somebody can help me solve this problem. Thanks in advance! //edit: I've just made a Wireshark trace of the SSH connection and there is nothing, I repeat, nothing different between the working and the failing connection (at least aside from frame numbers, ports and times, which obviously can't be equal). This leads me to the assumption that the error has to happen on the server's side.
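
    For reference, a capture taken on the server end would make it possible to compare a working and a failing login from the other side of the link; a minimal sketch (assuming tcpdump is installed on the Lenny host and that eth0 is the relevant interface; both are assumptions):

        # capture only SSH traffic so the trace stays small; run once per session
        tcpdump -i eth0 -s 0 -w /tmp/ssh-session.pcap port 22

    If the server-side trace also shows no difference between the two sessions, that would point at something between the TCP stream and the terminal rather than at the network itself.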

  • Feasibility of Windows Server 2008 DFS replication over WAN link

    - by CesarGon
    We have just set up a WAN link that connects two buildings in our organisation. The link is provided by a 100-Mbps point-to-point line. We have a Windows Server 2008 R2 domain controller on each side of the link. Now we are planning to set up DFS for file services across the organisation. The estimated data volume is over 2 TB, and it will grow at approximately 20% annually. My idea is to set up a file server in each building and install DFS so that all the contents stay replicated over the 100-Mbps link. I hope that this will ensure that any user is directed to the closest (and fastest) server when requesting a file from the DFS folders. My concern is whether a 100-Mbps WAN link is good enough to guarantee DFS replication. I have no experience with DFS, so any solid advice is welcome. The line is reliable (i.e. it doesn't crash often) and our data transfer tests show that a 5 MB/s transfer rate is easily achieved; this is approximately 40% of the nominal bandwidth. I am also concerned about latency, i.e. how long users will need to wait to see a change on one side of the link after it has been made on the other side. My questions are: Is this link between networks a reliable infrastructure on which to set up DFS replication? What replication latency would be typical (seconds, minutes, hours, days)? Would you recommend that we go for DFS in this scenario, or is there a better alternative? Many thanks.
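
    As a sanity check, the poster's own numbers put a rough lower bound on the initial replication; a hedged back-of-envelope in shell arithmetic (assuming the measured 5 MB/s is sustained):

        # 2 TB at 5 MB/s, expressed in hours
        echo $(( 2 * 1024 * 1024 / 5 / 3600 )) hours    # ~116 hours, i.e. roughly 5 days

    After that initial sync, DFS-R only replicates changes (it uses remote differential compression), so steady-state traffic should be far smaller than the first pass.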

  • Keep permalinks in Wordpress

    - by Remus Rigo
    Is there any way to set my permalinks so that they keep their exact link? If I have a post like this one, http://blog.rigo.ro/?p=11, then I would like it to keep this link every time I edit the post. I have installed the Revision Control plugin and I set it not to keep revisions. Any idea how to do this? I want to keep this format of links. Edit: I took another look; the permalinks do keep their links, but every time I edit, a new version is added to the database and the next post gets a higher number. If I edit my current post three times (blog.rigo.ro/?p=11), the next post will be blog.rigo.ro/?p=14. So my question is how I can keep all my posts and edits clean, one post/many edits = one entry in the database, so that if I have 10 posts on my site and I edit them, my permalinks still run from 1 to 10. PS: I don't want to edit my database manually; is there any plugin to do this?
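
    The gaps appear because revisions and autosaves draw IDs from the same auto-increment sequence as posts. A hedged sketch of the usual wp-config.php settings (PHP configuration rather than shell; the constants are real WordPress settings, but whether they cover this plugin setup is an assumption, and existing gaps are not renumbered):

        // stop storing revisions at the core level (plugin-independent)
        define('WP_POST_REVISIONS', false);
        // autosave at most once per day instead of every minute
        define('AUTOSAVE_INTERVAL', 86400);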

  • FTP FileWatcher

    - by Meiscooldude
    So, I am in this little predicament where I am stuck watching a few FTP folders to see if they have new files added to them. If they do, it needs to throw an event with the file name, thereby telling something else to download that file. This is a pretty simple object to make; I was just curious how expensive this operation would be. I plan on using the command NLIST because I don't need file size information, and there will be no sub-directories in the folder. Each file in the folder will have exactly 25 characters in its name. There could be anywhere from 10 to maybe a couple thousand (max around 2000) files per folder (usually on the lower end, 100-300, but currently growing). The files are anywhere from 250 KB to a very, VERY unlikely 10 MB (usually within the 250 KB to 4 MB range). There could possibly be up to a few hundred folders (in which case I could change the watch frequency depending on the number of folders), but currently there are only a few (6-10ish). There would also be multiple logins for the FTP server; different logins would have access to different folders. I am not asking for an implementation, just whether anyone has some first- or second-hand knowledge about FTP and how this could affect my network. I am not opposed to putting in file retention times or changing the frequency with which I check for new files.
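
    For scale, the poll being described is cheap: an NLST reply is one line per file, so even 2000 names at 25 characters each is on the order of 50 KB per folder per poll, and cost is dominated by polling frequency rather than folder size. A minimal sketch of such a poll (hedged; the host, path, and credentials are placeholders):

        #!/bin/sh
        # NLST-style poll: fetch names only, diff against the previous listing,
        # and print whatever is new since the last run
        touch /tmp/prev.txt
        curl -s -l ftp://ftp.example.com/watched/ --user watcher:secret | sort > /tmp/now.txt
        comm -13 /tmp/prev.txt /tmp/now.txt
        mv /tmp/now.txt /tmp/prev.txt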

  • Why *do* Windows print queues occasionally choke on a print job?

    - by Ian
    You know the way Windows print queues will occasionally stop working, with a print job at the head of the queue which just won't print and which you can't delete? Does anyone know what's going on when this happens? I've been seeing this since the NT4 days and it still happens on 2008. I'm talking about standard IP-connected laser printers, nothing fancy. I support a lot of servers and loads of workstations and see this happen a few times a year. The user will call saying they can't print. When you examine the print queue, which in my case will generally be a server-based queue shared out to the workstations, you find a print job which you cannot cancel. You also can't pause it, reinitialize it, nothing. Stopping the spooler is the usual trick and works sometimes. However, I occasionally see cases which even this doesn't cure and for which a reboot is the only solution. Pause the queue, reboot, and when it comes back up the job can then be deleted. Once it's gone the printer happily goes back to its normal state. No action is ever necessary on the printer. I regard having to reboot as a last resort and don't like it. What on earth can be going on when stopping the process (spooler) and restarting it doesn't clear the problem? It's not linked to any manufacturer either. I've seen this on HP, Lexmark, Canon and Ricoh devices, on lasers, on plotters... can't say I ever saw it on dot matrix. Anyone got any ideas as to what may be going on? Ian
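
    For reference, the "stopping the spooler" trick usually also involves clearing the spooled job files, which sometimes rescues the cases a plain service restart doesn't; a hedged sketch (Windows commands, assuming the default spool directory):

        net stop spooler
        del /Q %SystemRoot%\System32\spool\PRINTERS\*.*
        net start spooler

    This removes every queued job on the server, so it is a blunt instrument rather than an explanation of the underlying wedge.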

  • Disable CTRL+mouse wheel zooming in Chrome?

    - by Peter Nore
    I'm a normal-sighted person and I would like to view pages at 100% all the time. I use keyboard shortcuts that involve CTRL a lot, so about twenty times a day I accidentally hit CTRL at the same time that I'm scrolling, which results in the page being reflowed and repainted. This is annoying because it can take up to 30 seconds to fix, depending on how complex the site layout is. On sites with dynamic layout such as Google Docs the problem is more serious; accidentally hitting CTRL+mouse wheel corrupts the display and forces me to refresh the page entirely, sometimes causing me to lose information in the process. I would like to either decouple CTRL+mouse wheel from zoom, or disable zoom functionality altogether. This is possible in Firefox by using about:config; is there a similar way to edit detailed settings in Chrome? Would I have access to detailed settings if I used Chromium instead of Chrome? I'll probably jump ship back to Firefox if I can't solve this problem. There is a superuser question that asks basically the same thing I'm asking, but for Firefox and Internet Explorer exclusively. Other people on the Chrome forum have had related issues, but none have the same problem. "I would really like it if I could deactivate the auto zoom in/out" involved "something with laptops and Windows 7", not the feature built into Chrome. Other people have had PDF-specific issues, which don't concern me. I've also tried searching for extensions that disable the scroll zoom; I had hoped that "Zoom Lock" would be able to lock the zoom at 100% and prevent CTRL+scroll wheel from distorting the display, but it doesn't work for my use case. Google Chrome version 9.0.597.84 (Official Build 72991). Operating System: Ubuntu 10.10.

  • Why is crontab giving "No such file or directory" error when the file DOES exist?

    - by fettereddingoskidney
    I am getting the following three lines in an error message in /var/mail/username after the following job runs in crontab:

        15 * * * * /Applications/MAMP/htdocs/iconimageryidx/includes/insertPropertyRESI.php

    Errors:

        /applications/mamp/htdocs/iconimageryidx/includes/insertpropertyRESI.php: line 1: ?php: No such file or directory
        /applications/mamp/htdocs/iconimageryidx/includes/insertpropertyRESI.php: line 3: syntax error near unexpected token `'initialize.php''
        /applications/mamp/htdocs/iconimageryidx/includes/insertpropertyRESI.php: line 3: `require_once('initialize.php');

    The PHP script I am trying to execute DOES in fact exist, and I have made absolutely sure the spelling is correct several times. I ran a crontab on another script before and it worked just fine... any ideas? The 2nd and 3rd errors refer to line 3 of the following script (the one I am trying to run with the crontab):

        <?php
        require_once('initialize.php');
        require_once('insertPropertyTypes.php');

        $sDate;
        if (isset($_GET['startDate'])) {
            $sDate = $_GET['startDate'];
        } else {
            $sDate = '';
        }

        $insertResi = new InsertPropertyTypes('Listing', $sDate, 'RESI');
        ?>

    When I run my script insertPropertyRESI.php in the browser, it runs just fine. Also, initialize.php and insertPropertyTypes.php are in the same directory as insertPropertyRESI.php. I am using MAMP with PHP 5.3.5. Thanks for the help!
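
    The errors themselves point at the cause: "?php: No such file or directory" is the shell, not PHP, choking on the file, because cron executes the script directly and it has no interpreter line. A hedged fix is to invoke PHP explicitly in the crontab entry (the MAMP binary path below is an assumption; it varies by MAMP version):

        15 * * * * /Applications/MAMP/bin/php/php5.3.5/bin/php /Applications/MAMP/htdocs/iconimageryidx/includes/insertPropertyRESI.php

    Adding a #!/usr/bin/env php first line and making the script executable would work as well. Note too that $_GET is not populated when the script runs from cron, so the isset() branch will always fall through to the empty string.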

  • Resuming downloads in Firefox

    - by Kim
    Unfortunately, Firefox has still failed to add the option to resume downloads. I've run into this problem SO MANY times, and in my previous searches I found posts saying Firefox was going to fix that. As of 3.6.3 they haven't. I just tried Free Download Manager (FDM) again, having the Firefox addon FlashGot use it. The download gets passed to FDM, and fails, giving the error message "access denied, invalid username or password." No password was required. The site I'm trying to get the file from is turbobit.net, which limits download speeds to 100kb/sec and has a 59-second countdown before you get the link. I guess it's transparently using a password on their end. If I just download normally (save to disk) the download starts fine, but it fails after 30 minutes to 1 hour (always different), and my Wi-Fi connection will stop briefly, so I have to start all over. So I will never be able to download a large file. I also tried DTA instead of FDM with FlashGot, and I get an "access denied" message in DTA. Again, I reloaded, waited the 59 seconds, and downloaded with Firefox, and the download starts fine. The failure message in the Firefox Downloads window is "source file at http... could not be read." Any help would be greatly appreciated. When is Firefox finally going to add the ability to resume downloads? Is there some other software I haven't found using Google that will work?
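
    For what it's worth, resuming only works at all when the server honors HTTP Range requests, and that is easy to test outside the browser; a hedged sketch with wget (the URL is a placeholder, and a tokenized link like turbobit's may expire before it can be reused):

        # -c continues a partial download if the server supports Range requests
        wget -c "http://example.com/largefile.zip"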

  • AviSynth ChangeFPS: combining videos with different framerates

    - by Daniel Saner
    I have two video recordings of the same scene, but with different framerates, that I would like to combine using an AviSynth script. One video is recorded at 30fps, the other at 120fps. What I would like to do is keep them temporally synchronised, meaning that for each frame of the 30fps video, the output should display 4 frames from the 120fps video. I would like the final output video to play at 30fps so that the duration is 4 times that of the original recordings. From AviSynth's documentation, it seems ChangeFPS is the function I'll need, since it removes and duplicates frames, while AssumeFPS just changes the playback speed (and my plan is basically to quadruple every frame of the 30fps clip). However, the filter does not seem to do what it says. If I try:

        clip30 = AviSource("0326.avi").ChangeFPS(120)
        clip120 = AviSource("0326-120fps.avi")

    it doesn't affect the playback speed or frame count of the 30fps clip at all, but removes every fourth frame from the 120fps clip, which is not at all what I want. Unfortunately, appending .ChangeFPS(7.5) to clip120 instead does not have the inverse effect; in that case, it does exactly what's to be expected. Alternatively, if I try:

        clip30 = AviSource("0326.avi").AssumeFPS(7.5)
        clip120 = AviSource("0326-120fps.avi")

    there is no effect at all; both clips are played back at 30fps, meaning that only a quarter of the 120fps clip has been shown by the time the 30fps clip is over. So how can I combine these two clips in the manner I want? I was unable to find any other internal or external filters that would help me do that. It seems to me that if ChangeFPS did what the manual says, it would be the right one for the job.

  • Does a mini PCIe SSD fit into an Acer Aspire One?

    - by Narcolapser
    Question: What mini PCIe SSDs, if any, fit into the mini PCIe slot of the Acer Aspire One AOD250? Info: I have an Aspire One and I've been considering loading it with an SSD. The mini PCIe drives fascinate me, so I want to try that approach. They also tend to be cheaper and not much slower (at least not on read time, which matters more for a netbook). But I've heard that sometimes computers don't support certain mini PCIe cards, and I was wondering if anyone knew about the Aspire One. I tried asking Acer tech support, but they didn't know jack and spent the whole time informing me that I would have to support my Ubuntu install on my own, which I was. Anyway, rant aside, I'm looking at this drive: http://www.newegg.com/Product/Product.aspx?Item=N82E16820183252 It states it is exclusively for the Eee PC. Does that mean it was designed for the Eee PC but will work in my netbook, or is something going to go wrong? (Right now my main concern is it physically not fitting.) Any information would be appreciated. o7

  • Setting cfengine3 class based on command output

    - by gnomie
    This question is very similar to "How can I use the output of a command in cfengine3", but the answer does not apply in my case, I believe. I want to update a git repository via "git pull" and, based on whether that led to changes, trigger some follow-up action. Simplified: if there were something like "match output and set class" via some body if_output_matches, I would want to use something like this:

        bundle agent updateRepo {
          commands:
            "/usr/bin/git pull"
              contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
              classes => if_output_matches("Already up-to-date.","no_update");
          reports:
            no_update::
              "nothing updated";
        }

        body contain setuidgiddir_sh(owner,group,folder) {
          exec_owner => "$(owner)";
          exec_group => "$(group)";
          useshell => "true";
          chdir => "$(folder)";
        }

    So, is it possible to use the output of a (possibly expensive) command and base some decision on that? The execresult function is no good choice for me, as a) the pull may become expensive at times (not recommended according to the cfengine3 reference) and b) it does not allow specifying user, group, or working directory, which is important in my case. The repository is in user space and not owned by root.
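
    One hedged workaround is cfengine's module protocol: a commands promise run with module => "true" can set classes itself by printing lines that start with '+'. A sketch of a wrapper script under that assumption (the script path and name are hypothetical):

        #!/bin/sh
        # git-pull-module.sh: run git pull in the given directory and raise
        # the no_update class when nothing changed (module protocol: +class)
        cd "$1" || exit 1
        out=$(/usr/bin/git pull 2>&1)
        if [ "$out" = "Already up-to-date." ]; then
            echo "+no_update"
        fi

    The user/group/chdir requirements would still be handled by the contain body on the promise that runs the wrapper.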

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them for each and every page? Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:

        mkdir candy-canes-on-the-flannel-board-in-preschool
        cd candy-canes-on-the-flannel-board-in-preschool
        wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
        wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
        rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
        cd ../

    Now, I realize the script is not as savvy as it could be, but it is doing what I need at the moment, except that, as you can see from the rm command, I would just like to prevent wget from downloading the files in the first place, if possible. I almost forgot to mention that there are two wget commands because the first one downloads the page as index.html, and for some reason it does not open in my browser; however, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. But if I just issue the second wget command as it is, then that page (the same file, really, with an alternate name) opens up fine. If I could fix that, it would also help to streamline the process.
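
    wget's -R/--reject option takes a comma-separated list of file name suffixes or patterns, which looks like a direct fit here; a hedged sketch (same page, reject list shortened for illustration):

        wget -p -nd -A jpg,html -k \
             -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg" \
             http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/

    One caveat worth hedging: in some modes wget only deletes rejected files after fetching them, so this may save cleanup and disk churn rather than dial-up bandwidth; testing on a single page first would settle it.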

  • iptables captive portal remove user

    - by Burgos
    I followed this guide: http://aryo.info/labs/captive-portal-using-php-and-iptables.html I am implementing a captive portal using iptables. I've set up the web server and iptables on a Linux router, and everything is working as it should. I can allow a user to access the internet with

        sudo iptables -I internet -t mangle -m mac --mac-source USER_MAC_ADDRESS -j RETURN

    and I can remove access with

        sudo iptables -D internet -t mangle -m mac --mac-source USER_MAC_ADDRESS -j RETURN

    However, on removal, the user can still open the last viewed page as many times as he wants (if he restarts his Ethernet adapter, future connections will be closed). On the blog page I found a script:

        /usr/sbin/conntrack -L \
          | grep $1 \
          | grep ESTAB \
          | grep 'dport=80' \
          | awk \
          "{ system(\"conntrack -D --orig-src $1 --orig-dst \" \
          substr(\$6,5) \" -p tcp --orig-port-src \" substr(\$7,7) \" \
          --orig-port-dst 80\"); }"

    which, as written, should remove their "redirection" connection track, but when I execute it, nothing happens: the user still has access to that page. When I execute /usr/sbin/conntrack -L | grep USER_IP after running the script, I get nothing back. So my questions: Is there anything else that can help me clean up these tracked connections? Obviously I can't reset my own network adapter, nor the user's.
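
    A blunter hedged alternative is to drop every tracked connection originating from that client and let iptables re-evaluate the next packet (assumes conntrack-tools; USER_IP stands in for the client's address):

        conntrack -D --orig-src USER_IP

    Since the poster's own conntrack -L | grep USER_IP already comes back empty, it may also be worth checking whether the surviving flow is tracked under a different tuple (a NATted reply direction, for instance), which a grep on the original source would miss.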

  • Apache22 on FreeBSD - Starts, does not respond to requests

    - by NuclearDog
    Hey folks! I'm running Apache 2.2.17 with the peruser MPM on FreeBSD 8.2-RC1 on Amazon's EC2 (so it's Xen). It was installed from ports. My problem is that, although Apache is running, listening for, and accepting connections, it doesn't actually respond to any or show them in the log at all. If I telnet to the port it's listening on and type out an HTTP request:

        GET / HTTP/1.1
        Host: asdfasdf

    and hit enter a couple of times, it just sits there... nothing. No response when requesting with a browser either. There doesn't appear to be anything helpful in the error log:

        [Sun Jan 09 16:56:24 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        [Sun Jan 09 16:56:25 2011] [notice] Digest: generating secret for digest authentication ...
        [Sun Jan 09 16:56:25 2011] [notice] Digest: done
        [Sun Jan 09 16:56:25 2011] [notice] Apache/2.2.17 (FreeBSD) mod_ssl/2.2.17

    The access log stays empty:

        root:/var/log# wc httpd-access.log
        0 0 0 httpd-access.log

    I've tried with accf_http and accf_data both enabled and disabled, and with both the stock configuration and my customized config. I also tried uninstalling apache22-peruser-mpm and just installing straight apache22... still no luck. I tried removing all of the LoadModule lines from httpd.conf and re-enabled only the ones that were necessary to parse the config. I ended up with only the following loaded:

        root:/usr/local/etc/apache22# /usr/local/sbin/apachectl -M
        Loaded Modules:
         core_module (static)
         mpm_peruser_module (static)
         http_module (static)
         so_module (static)
         authz_host_module (shared)
         log_config_module (shared)
         alias_module (shared)
        Syntax OK

    Same results. Apache is definitely what's listening on port 80:

        root:/usr/local/etc/apache22# sockstat -4 | grep httpd
        root httpd 43789 3 tcp4 6 *:80 *:*
        root httpd 43789 4 tcp4 *:* *:*

    And I know it's not a firewall issue, as there is nothing running locally, and connecting from the local box to 127.0.0.1:80 results in the same issue. Does anyone have any idea what's going on, and why it would be doing this? I've exhausted all of my debugging expertise. Thanks for any suggestions!
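
    One hedged way to check whether the accepted connections ever reach a worker is to trace the listening process and its children while making a request (FreeBSD's truss; pgrep -o picks the oldest httpd, assumed here to be the master):

        truss -f -p $(pgrep -o httpd)

    If no read of the request ever shows up in the trace, that would point at the accept filter or the peruser MPM's request hand-off rather than at the configuration.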

  • Timeout settings for Remote Desktop Sessions to lock

    - by atroon
    Our office uses a Windows 2003 server to provide access to an accounting application. Recently I was asked to increase the amount of time it takes for a session to lock itself and require the entry of the user's password to resume. That seems to be about ten minutes at present. I am familiar with group policy and have tweaked those settings to scavenge sessions (and thereby licenses) that have been disconnected (by the user closing the mstsc.exe client or by a network issue). That's simple and straightforward. But I can't find anything in GP to allow a longer time period before the RDP client window goes black and then, when clicked upon, requires a username and password to resume the session. I must admit this would be nice personally as well, since most of my time is spent documenting the application and/or monitoring its database, so I usually have a window open to the terminal server along with the rest of the staff in the accounting center, but I interact with it very little. I usually enter my password 10-15 times per workday, but I'm pretty good at it by now. ;) So, can this timeout period be adjusted, or are we out of luck?
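
    A hedged guess about the mechanism: on 2003-era terminal servers the in-session lock is usually the password-protected screen saver rather than an RDP-specific timeout, so the knob would be the screen-saver timeout. Per user it lives in the registry (Windows command; the value is in seconds, 1800 = 30 minutes):

        reg add "HKCU\Control Panel\Desktop" /v ScreenSaveTimeOut /t REG_SZ /d 1800 /f

    The same setting can be deployed centrally through the screen-saver policies under User Configuration > Administrative Templates > Control Panel > Display; whether this is what is actually locking these sessions is an assumption worth verifying first.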

  • Google account gives ERR_SSL_BAD_RECORD_MAC_ALERT errors

    - by Kjensen
    A couple of days ago, I became unable to connect to accounts.google.com, which handles logins to all kinds of Google services. I get this error in Chrome: Error 126 (net::ERR_SSL_BAD_RECORD_MAC_ALERT): Unknown error. In IE I get an equivalent error page (screenshot omitted); I assume it is the same error, just wrapped up differently. I run Win8 RTM. On the SAME machine, using the same network card, in a VMware Workstation image running Win7, I am able to connect perfectly. On another of my machines on my network, I am also still able to connect with no problem. My girlfriend uses the same network and has also complained a couple of times about this error (Google Calendar), but this is anecdotal, since her technical troubleshooting abilities stop at "xxxx is broken". Her machine runs Win7. ;) I have rebooted, cleared cookies, do not run any antivirus/firewall, and have not changed my network config. For the first 3-4 days after installing Win8, I did not have any problems. I have also searched, and found a hint about enabling SSL 2.0 in connection settings, which did not help. Does anybody know something about this error and what I can do to fix it?

  • DLINK WBR-1310B Wireless Router seems to hang...

    - by Ira Baxter
    I have a brand new DLINK-1310B wireless router (box never before opened, although I bought it at the neighborhood computer junk store). I am using it at home (and in fact am using it this instant from a wireless laptop). When it is working, I can ping it at 192.168.0.1, and I can log into it at http://192.168.0.1 from the PC attached to it by LAN and from the wireless PC. In the course of the day since I've installed it, it seems to have locked up 3 times. Each time the symptoms are that my web browser (or other internet service) stops with a "No internet connection" error. Attempts to contact the router via 192.168.0.1 get no reaction, from either the wireless laptop or from the hardwired PC sitting next to it. It doesn't respond to pings to that address either. Rebooting fixes it. It's brand new. I've seen discussion in other questions about aging cheap electronics. It's too new to have aged. Anybody else seen this behavior with a DLINK-1310? Or do I just need to exchange it for another and try again? (I hate rolling dice; I bought the D-Link because a previous Linksys died of apparent heating problems.) Remarkably, nobody talks about how much software is in a router. Is the stuff just buggy?

  • Small TCP Window on WAN between 2 Locations

    - by Brent
    Site A: Denver datacenter, 60 Mbps. Site B: Chicago, 100 Mbps. ICMP pings:

        Packets: Sent = 176, Received = 176, Lost = 0 (0% loss)
        Approximate round trip times in milli-seconds:
            Minimum = 74ms, Maximum = 94ms, Average = 75ms

    File transfer between sites never goes past ~7 Mbps, while a Windows Update download runs at 60 Mbps+ (screenshots omitted). Site to site: IPsec VPN using two Cisco ASA 5520s, CPU at 3-4% and lots of memory to spare. The latency between the two sites is very acceptable, so I can't see why it performs so slowly when transferring between them. I have found that any type of transfer (FTP, HTTP, Windows file shares) will never go above ~7 Mbps. When the WAN was first set up, I was able to get transfers at 50-60 Mbps, which is what is expected given the 60 Mbps WAN connection at Site A. Then a few days later, I was not able to get anything going faster than ~7 Mbps. Is there an upstream router between Denver and Chicago causing this? I want to take the blame away from our setup, as downloads from Windows Update go blazing fast, and for the first few days after the site-to-site VPN came up, I was transferring VM images at 50-60 Mbps. Our stack: HP P2000 MSA - HP C7000 chassis - HP Flex-10 - Cisco gigabit switch - Cisco ASA - WAN.
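
    The ~7 Mbps ceiling is suspiciously close to what a default 64 KB TCP window yields at this RTT; a hedged back-of-envelope (throughput ceiling = window size / round-trip time):

        # 64 KB window over a 75 ms RTT, in kbit/s
        echo $(( 65536 * 8 / 75 )) kbit/s    # ~6990 kbit/s, i.e. ~7 Mbps

    That pattern, with every flow capping at the same rate regardless of protocol, usually means TCP window scaling is being lost somewhere on the path; the ASAs' TCP normalization settings would be one hedged place to look.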

  • Registry remotely hacked (Windows 7), need help tracking the perp

    - by user577229
    I was writing some .VBS code at the office that would allow certain file extensions to be downloaded without a warning dialog on a W7 x32 system. The system I was writing this on is in a lab on a segmented subnet. All web access is via a proxy server. The only means of accessing my machine is via the internet or from within the lab's MSFT AD domain. While writing and testing my code I found a message of sorts. Upon refreshing the registry to verify that my code had changed a DWORD, I instead found the message HELLO written and visible in regedit where the DWORD value should have been. I took a screenshot and proceeded to edit my code. This same weird behavior occurred the last time I was writing registry code, except on another internal server. I understand that remote registry access exists for Windows systems. I will block this immediately once I return to the office. What I want to know is: can I trace who made this connection, and how would I do it? I suspect the cause of this is the cause of other "odd" behaviors I'm experiencing at work, such as losing control of my Input Director master for over an hour, and unchanged code that all of a sudden fails for no logical reason. These failures occur at funny times, whenever I'm about to give a demonstration of my test code. I know this sounds crazy; however, knowledge of the registry component makes this believable. Once the registry can be accessed, the entire system is compromised. Any help or sanity checking is appreciated.

  • How to access remote network resource from local machine

    - by jerluc
    I just configured VPN access successfully, so that I can now connect to my workstation at work from my personal Linux box at home. The problem is that all of the dev files for a server I'm running locally are on my personal box and cannot be transferred to my workstation (at least not in any timely manner over this connection, given the amount of data, and in addition the server would need many reconfigurations to run there even if I could somehow get the files across). So essentially, I am able to run my server locally on my personal computer; however, the data sources required for the back end are accessible only from within the office's network. Is there some way for me to either access the data sources directly through a VPN connection, or, if I need to be a bit more convoluted, connect via VPN to my workstation and then somehow tunnel from my personal computer through the workstation to the data sources? And here I couldn't care less about the speed of the connection from my server to the data sources, since they will probably only be fetched a few times every hour or so. Thanks! Sorry if this is a stupid question and/or doesn't make any sense! (And sorry to anyone who read this on Stack Overflow; I posted it in the wrong area.)
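
    The "convoluted" route is a standard SSH port forward; a hedged sketch, assuming the workstation runs an SSH server reachable over the VPN and that the data source is a hypothetical db.office.local listening on port 5432:

        # local port 5432 now reaches db.office.local:5432 via the workstation
        ssh -L 5432:db.office.local:5432 user@workstation.office.local

    The local server would then be pointed at localhost:5432. For many data sources at once, a dynamic (-D) SOCKS forward is the same idea without one -L per host and port, provided the consuming code can use a SOCKS proxy.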

  • Internet Explorer 8 Loses Cookies

    - by Mikeon
    I've been running Windows 7 for some time now and use Internet Explorer 8 as my main browser. What I've noticed is that it "loses" cookies A LOT! I mean it! Typical situation: I log in to a site, checking the remember-me checkbox. I reboot the computer/restart the browser, go to the site, get logged in automatically, and I'm happy. From time to time, however, I'm asked for the credentials. A normal situation, you would say. So would I, if it didn't happen a few times a week. Come on! On Internet Explorer 7 I didn't notice this as much. Cookies were lost once a quarter or so. Note that I was using IE7Pro with my IE; I don't know, however, if that has anything to do with my current problem. Anyway, I wonder if this behavior is "normal" or is it only me? More info for people who suggest it may be normal (cookies expiring and such): when it happens I lose all auth cookies - gmail, bloglines and whatnot!

  • Coffee spilled inside computer, damaged hard drive

    - by Harpreet
    Today coffee spilled over my table, and a small amount of it reached the PC case placed under the table. I think a little got inside the PC case through the front. As that happened, the fan started running very fast and making noise. I tried to restart to see if it would be fine, but the computer didn't start again. First it gave an error of "Alert! Air temperature sensor not detected" and didn't start. I then tried starting the computer multiple times, but it gave some memory error. I was not able to start the computer. In case there's a problem with the hard disk or something related to memory, is there any way we can extract our work or data? I am scared that I won't be able to extract my work if a problem like that occurs. What options would I have? Help!! EDIT: I have attached the photo here, and you can see the affected area in the red circle. The hard drive electronics have been affected and the internal speaker may also have been affected. Any advice on cleaning, and on whether the hard drive can still work? EDIT 2: Are there any professional services offered to extract data from a damaged hard disk, like this one, in case I am not able to recover it personally?

  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    OK, so we're trying to figure out why the site of one of our clients isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, by either IP address or domain name, the site resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following for crawling), and when we ping the address, it times out (note: the site is still resolving in the browser throughout all of this). The site is built in ASP.NET (I assume C#), so my thought was that it was an errant redirect rule or some other sort of server-side issue. We also thought that it might be due to incorrect domain pointing (but if we try to ping the IP, it doesn't work, so that sort of rules that out). We're really not sure what is causing all of these errors, or even whether they have a single source. Anyone have any ideas what could be going on? Do you need any more information? To boil it down in a TL;DR:

    * Site resolving in browser, both IP and domain name. No problems here.
    * Site not being crawled by Google (gets a 302 it doesn't seem to follow); it is not due to robots.txt or meta tags.
    * Ping is not working for the IP address. This is very odd, because again, the IP address seems to work fine in the browser.
    * Our thoughts: a redirect rule issue, a domain pointing issue, or possibly some errant code, or some combination of the three.
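
    Since the 302 only shows up for Google, a hedged first step is to reproduce the crawler's view of the site and read the Location header it is being sent (curl; the UA string mimics Googlebot, and example.com stands in for the client's domain):

        curl -sS -I -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://example.com/

    If the redirect appears only with that User-Agent, the server is treating the crawler specially; the failing ping, meanwhile, is consistent with ICMP simply being filtered and may be a red herring.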

  • Modify or disable Windows 8 swipe gestures on touchpad / laptop

    - by Matsemann
    I have an ASUS G75VW laptop with a Synaptics touchpad. When I move my finger from one edge towards the middle (the swipe), Windows 8 brings up different stuff. This is a problem because the area where I can actually move the mouse with my finger is too small (or rather, I mostly use the top left of the touchpad). So I often end up doing a swipe and bringing up some menu, or doing the swipe so slowly that no menu appears but the mouse pointer also doesn't move when I move my finger. Quite annoying. Swiping from the left edge used to switch apps like crazy. I disabled that, so now it only brings up the same menu as pressing Win+Tab (or sometimes the charms bar, I never know which). I changed that via: Win+I - Change PC settings - General - "When I swipe from the left edge, switch directly to my most recent app". I've tried the mouse settings in Control Panel, the driver settings for my touchpad, and searching for "swipe" and "gestures" on my computer (which is what led me to the setting above), with no luck. How can I disable the swipe gestures, or change what they do? Thanks.

  • Find out when a system went down?

    - by Clinton Blackmore
    I have a Mac OS X 10.5 server, with a RAID set in it, that went down due to a power outage on Thursday, and the machine is not happily booting right now*. Is it possible to find out when the machine went down, while not booted off the internal drive? (I'm booted off an external drive, waiting for the RAID sets to initialize.) Normally, I'd run last. The man page doesn't indicate that I can run it against a different startup volume. It looks possible to parse /var/log/utmpx, but I don't think it'd be worthwhile to try to do that from scratch for this one-off problem. * I'm still trying to figure out why it isn't happy, and may ask a follow-up question. Right now I can see that UserNotificationCenter crashed repeatedly early Thursday morning, and that securityd, mdworker, and ARDAgent crash shortly after startup [I think -- I want to verify when the box went up and down]. The login window does not come up right (I think it is crashing or not able to cope with a dead securityd). The box is supposed to go down when the UPS tells it power is out; at the moment, I'm wondering if it went down and turned back on multiple times! I sure hope not.
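
    A hedged suggestion: BSD-derived last implementations generally accept an alternate accounting file via -f, which would read the internal volume's log directly (the mount point below is a placeholder, and whether 10.5's last supports -f is an assumption worth checking against its man page):

        last -f /Volumes/ServerHD/var/log/utmpx reboot shutdown

    The reboot and shutdown pseudo-users restrict the output to the entries that bracket the outage.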
