Search Results

Search found 11990 results on 480 pages for 'non deterministic'.

  • IPtables - Accept Arbitrary Packets

    - by Asad Moeen
    I've made good progress blocking attacks on game servers, but I'm stuck on something. I've rate-limited the main queries the game server accepts, which start with "\xff\xff\xff\xff" followed by the actual command, e.g. "\xff\xff\xff\xff getstatus ". However, I see that other queries sent to the game server cause it to reply with a "disconnect" packet at the same rate as the input, so a high input rate produces an equally high rate of "disconnect" replies, which can lag the server. Hence I want to block all queries except the ones actual clients use, which I suppose are in the form "\xff\xff\xff\xff" or ... So I tried using these rules:

        -A INPUT -p udp -m udp -m u32 ! --u32 0x1c=0xffffffff -j ACCEPT
        -A INPUT -p udp -m udp -m recent --set --name Total --rsource
        -A INPUT -p udp -m udp -m recent --update --seconds 1 --hitcount 20 --name Total --rsource -j DROP

    The rules do accept the clients, but they only block requests of the form "\xff\xff\xff\xff getstatus " (to which the game server replies with its status) and not a bare "getstatus " (to which the game server replies with a disconnect packet). So I suppose the accept rule is accepting the plain string as well. I actually want it to also block the non-\xff queries. How do I modify the rule?
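
    A minimal sketch of the inverted approach, assuming a hypothetical game port of 27960: instead of accepting non-\xff packets, drop anything whose first four payload bytes are not 0xff. The u32 expression walks the IP header length rather than hard-coding offset 0x1c, so it also survives IP options.

        # Drop UDP packets to the game port unless the payload starts with \xff\xff\xff\xff;
        # "0>>22&0x3C@" skips the IP header, the further "8" skips the UDP header.
        iptables -A INPUT -p udp --dport 27960 -m u32 ! --u32 "0>>22&0x3C@8=0xffffffff" -j DROP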

  • Simple Linux program that takes any HTTP/HTTPS request and returns a single page?

    - by ultrasawblade
    I have a Linux box operating as a router. There's a NIC connected to the internet (WAN), a NIC connected to an 8-port GbE switch (LAN), and a NIC connected to a Linksys wireless-N router (WLAN). Routing between everything works perfectly. I have security completely disabled on the wireless router, but the WLAN NIC is firewalled such that it will only accept DNS queries and PPTP VPN connections; HTTP/HTTPS traffic and everything else is currently blocked.

    I would like to run something that listens on ports 80/443 of the WLAN NIC and, for non-VPN'd connections, answers any HTTP/HTTPS request with a single webpage saying "Unauthenticated" that explains how to sign into the VPN. A transparent proxy seems to be what I need, but my searches all direct me to Squid, which is already running on my server and seems overkill for this simple task. Is there a simpler, lightweight program that does just this, or should I just suck it up and run two instances of Squid (or figure out how to configure it)? Or is this entire VPN thing I'm doing complete nonsense, and should I just enable encryption on the wireless router?
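
    A minimal sketch of the transparent-redirect idea, assuming the WLAN NIC is eth2 and that Python 2 is available (both are assumptions; also, intercepting HTTPS this way will always trigger certificate warnings, so port 80 alone may be the pragmatic scope):

        # send all HTTP arriving on the WLAN interface to a local port
        iptables -t nat -A PREROUTING -i eth2 -p tcp --dport 80 -j REDIRECT --to-ports 8080
        # serve one static "Unauthenticated" page from an empty directory
        mkdir -p /srv/unauth
        echo '<h1>Unauthenticated</h1><p>Sign into the VPN first.</p>' > /srv/unauth/index.html
        cd /srv/unauth && python -m SimpleHTTPServer 8080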

  • Antivirus service does not run - Windows XP SP3 32-bit Home

    - by Stefan Fassel
    I have a somewhat strange problem here. I am trying to run antivirus software on my Windows XP Home 32-bit system. After a serious crash I had to fall back to an outdated copy of my initial installation and let Windows install five years of updates. So far so good.

    After installing new antivirus software (BitDefender 2012) everything seemed to be fine: the initial scan went through and configuration was working. But after restarting the system, the virus scanner was unable to start up again; even the configuration console of the AV software did not start. I tried scanning the system for malware, but nothing was found. Then I tried a different AV product (MS Security Essentials), but in the end it failed to start too.

    I have tried to start the service manually, but I seem to be missing the privilege to do so. I am logged in as a non-"Administrator" user with admin privileges (not much choice there on an XP Home system), and I cannot switch to the Administrator account outside Safe Mode. When running Windows in Safe Mode I am unable to start the AV software, because it does not run in Safe Mode. I am a bit at a loss now...
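
    A hedged sketch (Windows cmd, from the admin-privileged account): the exact service names below are assumptions, so list what is installed first, then inspect and try to start whichever entry matches.

        rem find the AV service names (the search strings are guesses)
        sc query state= all | findstr /i "defender essentials bitdefender"
        rem show how the matching service is configured, then attempt a manual start
        sc qc MsMpSvc
        sc start MsMpSvc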

  • Search inside text files

    - by Matt
    So here is the situation. I currently run a mail server for my small non-profit company. My mail server (Merak Mail Server) keeps logs in .log files and mail in .tmp files; essentially these are just text files kept on the server. The problem is that when I put text into the "Containing text" field in Windows Explorer, it always misses the files and tells me no results were returned. Then when I search the files one by one (painful at best), I find the files I need. Do I not understand the search feature well enough, or maybe I have indexing configured wrong?

    I really don't care what I need to use to search the files; even a third-party app is fine with me. I just want to type an email address into a box, search all of my log files or email files, and find the one I am looking for. It can be Windows Search or something else; as long as I can find a way to get the job done I will be happy. Paid solutions are fine as well. Thanks everyone in advance.
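
    A minimal sketch using the built-in findstr, which reads file contents directly and ignores Explorer's indexing rules entirely (the folder path and address below are placeholders):

        rem /s = recurse subfolders, /i = case-insensitive, /m = print matching file names only
        findstr /s /i /m "user@example.com" "C:\MerakMail\*.log" "C:\MerakMail\*.tmp"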

  • Postfix misconfigured? 550 Sender rejected from receiving server

    - by wnstnsmth
    We use Postfix on our CentOS 6 machine with the following configuration, and we use PHP's mail() function to send rudimentary password-reset emails, but there is a problem. As far as I can tell, mydomain and myhostname are set correctly:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        inet_protocols = all
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        mydomain = ***.ch
        myhostname = test.***.ch
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550

    This is what /var/log/maillog shows upon sending an email to ***.***@***.ch, with ***.ch being the same domain our sending server test.***.ch is on:

        Dec 13 16:55:06 R12X0210 postfix/pickup[6831]: E6D6311406AB: uid=48 from=<apache>
        Dec 13 16:55:06 R12X0210 postfix/cleanup[6839]: E6D6311406AB: message-id=<20121213155506.E6D6311406AB@test.***.ch>
        Dec 13 16:55:07 R12X0210 postfix/qmgr[6832]: E6D6311406AB: from=<apache@test.***.ch>, size=1276, nrcpt=1 (queue active)
        Dec 13 16:55:52 R12X0210 postfix/smtp[6841]: E6D6311406AB: to=<***.***@***.ch>, relay=mail.***.ch[**.**.249.3]:25, delay=46, delays=0.18/0/21/24, dsn=5.0.0, status=bounced (host mail.***.ch[**.**.249.3] said: 550 Sender Rejected (in reply to RCPT TO command))
        Dec 13 16:55:52 R12X0210 postfix/cleanup[6839]: 8562C11406AC: message-id=<20121213155552.8562C11406AC@test.***.ch>
        Dec 13 16:55:52 R12X0210 postfix/bounce[6848]: E6D6311406AB: sender non-delivery notification: 8562C11406AC
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: from=<>, size=3065, nrcpt=1 (queue active)
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: E6D6311406AB: removed
        Dec 13 16:55:52 R12X0210 postfix/local[6850]: 8562C11406AC: to=<root@test.***.ch>, orig_to=<apache@test.***.ch>, relay=local, delay=0.13, delays=0.07/0/0/0.05, dsn=2.0.0, status=sent (delivered to mailbox)
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: removed

    So the receiving server rejects the sender (fourth log line). We have tested it with one other recipient and it worked, so this problem might be completely unrelated to our settings and instead related to the recipient. Still, with this question, I want to make sure we're not making an obvious misconfiguration on our side.
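
    A hedged way to reproduce the rejection outside PHP and Postfix: speak SMTP to the receiving host by hand and watch where it objects (hostnames and addresses below reuse the masked placeholders from the logs):

        telnet mail.***.ch 25
        # then type, one line at a time:
        #   EHLO test.***.ch
        #   MAIL FROM:<apache@test.***.ch>
        #   RCPT TO:<***.***@***.ch>     <- the "550 Sender Rejected" should appear here
        #   QUIT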

  • How to set up the .ssh directory inside an encrypted volume on Mac OS X and still have public-key logins?

    - by Vitaly Kushner
    I have my .ssh directory inside an encrypted sparse image, i.e. ~/.ssh is a symlink to /Volumes/VolumeName/.ssh. The problem is that when I try to ssh into that machine using a public key, I see the following error message in /var/log/secure.log:

        Authentication refused: bad ownership or modes for directory /Volumes

    Any way to solve this in a clean way?

    Update: the permissions on ~/.ssh and authorized_keys are right:

        > ls -ld ~
        drwxr-xr-x+ 77 vitaly staff 2618 Mar 16 08:22 /Users/vitaly/
        > ls -l ~/.ssh
        lrwxr-xr-x 1 vitaly staff 22 Mar 15 23:48 /Users/vitaly/.ssh@ -> /Volumes/Astrails/.ssh
        > ls -ld /Volumes/Astrails/.ssh
        drwx------ 3 vitaly staff 646 Mar 15 23:46 /Volumes/Astrails/.ssh/
        > ls -ld /Volumes/Astrails/
        drwx--x--x@ 18 vitaly staff 1360 Jan 12 22:05 /Volumes/Astrails//
        > ls -ld /Volumes/
        drwxrwxrwt@ 5 root admin 170 Mar 15 20:38 /Volumes//

    The error message says the problem is with /Volumes, but I don't see the problem. Yes, it is o+w, but it is also +t (sticky), which should be OK but apparently isn't. The problem is I can't change the /Volumes permissions (or rather shouldn't), but I do want public-key login to work.

    First I thought of mounting the image somewhere other than /Volumes, but it is automounted there on login by the standard OS X mounting. I asked about it here: How to change disk image's default mount directory on osx. The only answer I got was "you can't" ;) I could hack my way around it by writing some shell script that manually mounts the volume at a non-standard location, but that would be a gross hack. I'm still looking for a cleaner way to do what I need.
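
    A minimal sketch of the usual escape hatch, assuming you accept the tradeoff: sshd's StrictModes check is what rejects the world-writable /Volumes, and it can be turned off. On OS X of that era the config lives at /etc/sshd_config, and sshd is spawned per-connection by launchd, so no restart should be needed.

        # disables sshd's ownership/permission sanity check (a real safety tradeoff)
        echo "StrictModes no" | sudo tee -a /etc/sshd_config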

  • Disadvantages of enabling 'Low Fragmentation Heap' LFH on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production classic ASP website running on IIS6 which seems indicative of memory fragmentation. One of the suggestions of how to ameliorate this came from Stack Overflow: How can I find why some classic asp pages randomly take a real long time to execute?. It suggested flipping a setting in the site's global.asa file to turn on the Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick:

        Set LFHObj = CreateObject("TURNONLFH.ObjTurnOnLFH")
        LFHObj.TurnOnLFH()
        application("TurnOnLFHResult") = CStr(LFHObj.TurnOnLFHResult)

    (Really the code isn't that important to the question.) An author of a linked post reported a seemingly magic resolution to this issue, and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned: why is this setting not enabled by default on 2003, or, if it works in 2008, why has Microsoft not issued a patch to enable it by default on 2003? I suspect the answer to both is the same (if there is one).

    Obviously, we're testing it in a non-production environment and doing an array of metrics and comparisons to judge whether it does help us. But aside from this I'm really just trying to understand whether there's any technical reason why we should do this, or whether there are any gotchas that we need to be aware of.

  • Debugging Windows PC freeze

    - by Violet Giraffe
    I have a problem with my computer and would appreciate any hints/ideas. It usually begins not immediately after booting Windows, but at some unpredictable point in time which doesn't seem to correlate with any specific action of mine.

    The first sign of a problem is the System process starting to consume a steady 25% of CPU time. I have a quad-core CPU, so it might be one thread working non-stop. At this point micro-freezes start to occur: the screen stops refreshing, but if I have, say, a music player running, it continues playing. If I try to do something between the freezes, like open the Start menu, the machine freezes completely and forever. If I press the reset button the PC shuts down and then starts cold, as opposed to the usual reset behavior (which doesn't include the PC shutting down). I have noticed that a full restart upon reset is usual for hardware problems, but I don't think this problem is related to the motherboard, CPU, RAM or video adapter, and it certainly isn't caused by overheating.

    One very important note: it seems to be related to a Windows hosted WLAN network. I have a USB Wi-Fi dongle and have configured a hosted network to share a cable Internet connection with Wi-Fi devices. I am not 100% certain there's a strong connection, but in the 9 or 10 cases when I enabled the network (by executing netsh wlan start hostednetwork) it did freeze eventually (sometimes within minutes of starting the network, sometimes within hours), and on at least 10 days when I didn't start the network it never froze, no matter how I used the computer.

    There are no critical/error entries in the event log that I can suspect as being related, only regular stuff like "driver not loaded". I have found no critical/error events logged around the time a freeze occurs that are not also logged during a normal boot without starting the WLAN.
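
    A hedged first step, since the hosted network is the main suspect: check whether the Wi-Fi driver officially supports hosted networks at all, and capture its state before a freeze (plain cmd; nothing below is specific to this machine).

        rem look for "Hosted network supported: Yes" in the driver report
        netsh wlan show drivers
        rem current state, number of clients, and the adapter in use
        netsh wlan show hostednetwork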

  • My Windows 7 has suddenly stopped displaying Unicode symbols

    - by Felix Dombek
    For some strange reason, my computer suddenly doesn't show certain Unicode characters anymore! I have no idea what happened. Affected applications include Windows Explorer (should be Japanese characters), Google Chrome (should be a heart), and Winamp (should be stars). Russian, German, etc. characters are displayed normally. Chrome also displays Japanese script on websites, but not in its GUI. How can I fix it?

    Update: I have tried to use System Restore to fix it. I needed to go back in time quite a while, because the most recent restore points didn't solve it, so I used one from the middle of November. After that restore, Unicode symbols were displayed again. Then I updated my system with Windows Update again, because the updates were removed during the restore. After that, the error occurred again! I then did a restore to a point before my new updates, but the error persists, the old restore point (which I used before) is gone, and there are currently no other snapshots of the system. Any suggestions on what to do now?

    Update 2: I could find a workaround: Control Panel → Region and Language → Administrative → change the language for Unicode-incompatible programs to Japanese (Japan). All the mentioned programs display their symbols correctly again. However, I don't consider this a fix, because these programs are not actually Unicode-incompatible, and it also leads to some (non-serious) artifacts in some programs. I still welcome an answer that tells me what went wrong here and how to fix the issue.

  • How many reverse proxies (nginx, haproxy) is too many?

    - by Alysum
    I'm setting up an HA (high availability) cluster using nginx, haproxy and apache. I've been reading great things about nginx and haproxy; people tend to choose one or the other, but I like both. Haproxy is more flexible for load balancing than nginx's simple round robin (even with the upstream-fair patch), but I'd like to keep nginx for redirecting non-https to https, among other things, right at the point of entry to the cluster. On the other hand, nginx is a lot faster for serving static content and would reduce the load on the powerful apache, which loves to eat a lot of RAM!

    Here is my planned setup:

    Load balancer: nginx listens on ports 80/443 and proxies to haproxy on 8080 on the same server, which load balances between the multiple nodes.

    Nodes: nginx on the node listens for requests coming from haproxy on 8080; if the content is static, it serves it, but if it's a backend script (in my case PHP), it proxies to apache2 on the same node server, listening on a different port number.

    Technically this setup works, but my concern is whether having the requests go through several proxies is going to slow them down. Most of the requests will be PHP requests, as the backends are services, which means going from nginx - haproxy - nginx - apache. Thoughts? Cheers
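
    A minimal sketch of the entry-point piece (standard nginx syntax with placeholder certificate paths, not this cluster's actual config): plain HTTP is bounced to HTTPS, and TLS-terminated traffic is handed to haproxy on 8080.

        server {
            listen 80;
            return 301 https://$host$request_uri;
        }
        server {
            listen 443 ssl;
            ssl_certificate     /etc/nginx/ssl/site.crt;
            ssl_certificate_key /etc/nginx/ssl/site.key;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }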

  • LogMeIn-style remote access to a NAS drive

    - by Mere Development
    I've been asked to set up some remote access to a NAS drive. The NAS will sit on a VLAN inside a network that uses a Cisco 891 IS router as gateway. The charity has no SSL-VPN licenses for the Cisco. At present there are no open ports or services on the Cisco itself, and ideally we would like to keep it that way for a while, hence the request for a LogMeIn-style service that's initiated from inside. We need multiple-user access, about 10 max. Using LogMeIn on a machine connected to the NAS would only provide screen sharing, I believe, and no concurrent connections (could be wrong?).

    The end users need to be able to read and write files on the NAS from Macs and PCs around the globe. Read-only access from mobile devices would be a bonus but not absolutely necessary. This is for a charity, non-commercial, but they are willing to spend if necessary. Cisco config knowledge is at a minimum, so if I can avoid upsetting that delicate device I'll be happy :) Anyone have any clever ideas? I can provide more information on request. Thanks, Ben
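
    One hedged sketch of the "initiated from inside" pattern that leaves the Cisco untouched: a persistent outbound reverse SSH tunnel from a machine beside the NAS to a small rented VPS (all hostnames and ports below are placeholders).

        # on a box next to the NAS; publishes the NAS's SSH/SFTP on VPS port 2222
        autossh -M 0 -N -R 2222:nas.local:22 tunnel@vps.example.org
        # users anywhere then connect through the VPS:
        #   sftp -P 2222 user@vps.example.org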

  • Anonymous access to SMB share hosted on Server 2008 R2 Enterprise

    - by bwerks
    Hi all, first off, I have read through this post and a whole slew of non-SF posts which seem to address the same or a similar problem, but I was still unable to fix it. I've got three machines in this situation:

    - a domain-joined server running Server 2008 R2 Enterprise ("share server")
    - a domain-joined workstation running XP Pro SP3 ("workstation")
    - a domain-unjoined test server running Server 2003 R2 SP2 ("test server")

    The share server exposes a share on the network that the test server must access; it's a Source/Symbol Server share for our debugging purposes. I believe Visual Studio simply accesses the share with its own credentials, meaning the share must be accessible anonymously, since the test server isn't joined to the domain and there's no opportunity to supply domain authentication. I've attempted a lot of things to avoid the authentication window when accessing the share:

    - I've enabled the Guest account on the share server and given Guest full sharing/NTFS permissions on the share.
    - I've given ANONYMOUS LOGON full sharing/NTFS permissions on the share.
    - I've added my share to "Network access: Shares that can be accessed anonymously" in local security policy (LSP).
    - I've disabled "Network access: Restrict anonymous access to Named Pipes and Shares" in LSP.
    - I've enabled "Network access: Let Everyone permissions apply to anonymous users" in LSP.
    - I've added ANONYMOUS LOGON to "Access this computer from the network" in LSP.
    - I've added the Guest account to "Access this computer from the network" in LSP.
    - I've attempted to provision the share using the Share and Storage Management MMC snap-in.

    Unfortunately, when I attempt to access the share from the test server, I still see the prompt and am forced to enter "Guest" manually. I also tried this workflow using the local administrator account on a workstation, and the same thing happens both with and without XP Simple File Sharing enabled. Any idea why I'm getting these results, or what I should have done differently?
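
    A hedged way to test whether a true null session is accepted, from the test server and without Visual Studio in the way (server and share names are placeholders):

        rem empty password plus empty user = anonymous/null session
        net use \\shareserver\symbols "" /user:""
        dir \\shareserver\symbols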

  • Solaris 10: How to image a machine?

    - by nonot1
    I've got a Solaris 10 workstation that I'd like to create a full image backup of. The machine has two drives: one UFS for the system root and one ZFS for data storage. I intend to add a third HD to keep the backup images of both primary drives (including any ZFS snapshots). The purpose is not disaster recovery, but rather to let me easily blow away a series of application installation/configuration changes I intend to try.

    What's the best way to do this? I'm not too familiar with Solaris, but I have some basic Linux knowledge. I looked at CloneZilla, but it does not support Solaris. I'm OK with just a dd | gzip > image style solution, but I'd need some way to first zero out the unused blocks on the primary drives to aid gzip; they are much larger than my third drive, but hardly hold any real data.

    Update to clarify: I specifically want to avoid using any file-system snapshot functionality, because part of the app configuration changes involve/depend slightly on existing and new snapshots. Ideally the full collection of snapshots should be part of the backup. Virtualization is not an option, because the goal is to do performance evaluation on a very specific HW configuration. For the same reason, spurious "back up" snapshots could skew the performance data. Thank you
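
    A minimal sketch of the dd | gzip route (device names are placeholders; s2 is the traditional whole-disk slice, and the zero-fill trick only helps on file systems it actually runs on):

        # fill free space with zeros so unused blocks compress away, then remove the file
        dd if=/dev/zero of=/data/zerofill bs=1048576; rm /data/zerofill
        # image the whole disk to the third drive
        dd if=/dev/rdsk/c0t0d0s2 bs=1048576 | gzip -1 > /backup/rootdisk.img.gz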

  • Can enabling a RAID controller's writeback cache harm overall performance?

    - by Nathan O'Sullivan
    I have an 8-drive RAID 10 setup connected to an Adaptec 5805Z, running CentOS 5.5 and the deadline scheduler. A basic dd read test shows 400 MB/s, and a basic dd write test shows about the same. When I run the two simultaneously, I see the read speed drop to ~5 MB/s while the write speed stays at more or less the same 400 MB/s. The output of iostat -x, as you would expect, shows that very few read transactions are being executed while the disk is bombarded with writes.

    If I turn the controller's writeback cache off, I don't see a 50:50 split, but I do see a marked improvement: somewhere around 100 MB/s reads and 300 MB/s writes. I've also found that if I lower the nr_requests setting on the drive's queue (somewhere around 8 seems optimal) I can end up with 150 MB/s reads and 150 MB/s writes, i.e. a reduction in total throughput, but certainly more suitable for my workload.

    Is this a real phenomenon, or is my synthetic test too simplistic? The reason this could happen seems clear enough: when the scheduler switches from reads to writes, it can run heaps of write requests, because they all just land in the controller's cache, but they must be carried out at some point. I would guess the actual disk writes occur when the scheduler starts trying to perform reads again, resulting in very few read requests being executed. This seems a reasonable explanation, but it also seems like a massive drawback to using writeback cache on a system with non-trivial write loads. I've been searching for discussions around this all afternoon and found nothing. What am I missing?
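
    A hedged sketch of the experiment as described (device and file paths are placeholders; the direct-I/O flags keep the page cache from muddying the numbers):

        # concurrent sequential read and write, as in the question
        dd if=/dev/sda of=/dev/null bs=1M count=20000 iflag=direct &
        dd if=/dev/zero of=/data/ddtest bs=1M count=20000 oflag=direct &
        wait
        # the queue-depth tweak mentioned above
        echo 8 > /sys/block/sda/queue/nr_requests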

  • IPv6 working fine, IPv4 throws OpenSSL error

    - by jippie
    I am building a webserver (http://blog.linformatronics.nl/) which functions just fine over both IPv4 and IPv6 when using a non-SSL connection. However, when I connect to it through https, IPv6 works as expected, but an IPv4 connection throws a client-side error, and the server-side logs are empty for the IPv4/https connection. Summarized in a table:

             | http  | https
        -----+-------+-------------------------------------------------------
        IPv4 | works | OpenSSL error, failed. No server side logging.
        -----+-------+-------------------------------------------------------
        IPv6 | works | self signed certificate warning, but works as expected

    Apparently the SSL tunnel isn't even set up, which accounts for the Apache logs being empty. But why does it work fine for IPv6 and fail for IPv4? My question is why this OpenSSL error is being thrown and how I can solve it. Below is some extra information about the setup.

    IPv6 https. Command used to reproduce the IPv6/https behaviour:

        $ wget --no-check-certificate -O /dev/null -6 https://blog.linformatronics.nl
        --2012-11-03 15:46:48--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 2001:980:1b7f:1:a00:27ff:fea6:a2e7
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|2001:980:1b7f:1:a00:27ff:fea6:a2e7|:443... connected.
        WARNING: cannot verify blog.linformatronics.nl's certificate, issued by `/CN=localhost':
          Self-signed certificate encountered.
        WARNING: certificate common name `localhost' doesn't match requested host name `blog.linformatronics.nl'.
        HTTP request sent, awaiting response... 200 OK
        Length: 4556 (4.4K) [text/html]
        Saving to: `/dev/null'
        100%[=======================================================================>] 4,556 --.-K/s in 0s
        2012-11-03 15:46:49 (62.5 MB/s) - `/dev/null' saved [4556/4556]

    IPv4 https. Command used to reproduce the IPv4/https behaviour:

        $ wget --no-check-certificate -O /dev/null -4 https://blog.linformatronics.nl
        --2012-11-03 15:47:28--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 82.95.251.247
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|82.95.251.247|:443... connected.
        OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
        Unable to establish SSL connection.

    Notes: I am on Ubuntu Server 12.04.1 LTS.
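
    A hedged next step: "unknown protocol" from OpenSSL usually means something other than TLS answered on port 443, so it is worth seeing what each address family actually returns (both tools below are standard).

        # force IPv4; if a plain-HTTP response comes back, port 443 is mis-wired for v4
        openssl s_client -connect 82.95.251.247:443 < /dev/null
        printf 'HEAD / HTTP/1.0\r\n\r\n' | nc 82.95.251.247 443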

  • DNS issue on Fedora 12? wget wordpress.org fails where wget www.google.com works

    - by Tom Auger
    I'm administering a Fedora 12 box, but am quite new to networking specifics. Recently one of our WordPress apps hosted on our server has stopped being able to perform its auto-update or auto-download of plugins. Investigating further, I have tried the following:

        $ wget wordpress.org
        --2010-12-17 11:26:50--  http://wordpress.org/
        Resolving wordpress.org... failed: Temporary failure in name resolution.
        wget: unable to resolve host address `wordpress.org'

    Whereas:

        $ wget www.google.com
        --2010-12-17 11:27:26--  http://www.google.com/
        Resolving www.google.com... 74.125.226.82, 74.125.226.84, 74.125.226.80, ...
        Connecting to www.google.com|74.125.226.82|:80... connected.
        HTTP request sent, awaiting response... 302 Found
        Location: http://www.google.ca/ [following]
        --2010-12-17 11:27:26--  http://www.google.ca/
        Resolving www.google.ca... 173.194.32.104
        Connecting to www.google.ca|173.194.32.104|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: `index.html.4'
        [ <=> ] 9,079 --.-K/s in 0.02s
        2010-12-17 11:27:26 (462 KB/s) - `index.html.4' saved

    Interestingly:

        $ ping wordpress.org
        PING wordpress.org (72.233.56.138) 56(84) bytes of data.
        64 bytes from wordpress.org (72.233.56.138): icmp_seq=1 ttl=50 time=81.5 ms
        64 bytes from wordpress.org (72.233.56.138): icmp_seq=2 ttl=50 time=67.3 ms
        ^C
        --- wordpress.org ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1783ms
        rtt min/avg/max/mdev = 67.361/74.448/81.536/7.092 ms

    and:

        $ nslookup wordpress.org
        Server: 192.168.2.1
        Address: 192.168.2.1#53
        Non-authoritative answer:
        Name: wordpress.org
        Address: 72.233.56.138
        Name: wordpress.org
        Address: 72.233.56.139

    nscd has been stopped and flushed. iptables appears to be clean. At this point I have exhausted my limited ability to diagnose the issue. Can anyone suggest a resolution path?
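
    A hedged diagnostic, since ping and nslookup resolve the name while wget (via glibc) cannot: compare the glibc resolution path with a direct query to the configured server.

        cat /etc/resolv.conf              # what glibc actually consults
        getent hosts wordpress.org        # same NSS path that wget uses
        dig @192.168.2.1 wordpress.org    # the server nslookup reported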

  • How to move your Windows User Profile to another drive in Windows 8

    - by Mark
    I like to have my user folder on a different drive (D:) than my OS (C:). After reading the following post I decided to give it a try. All went quite well, until I found out that my Windows 8 apps won't launch anymore (other than that, I haven't noticed any problems). The apps do work under an account that wasn't moved. In the Event Viewer I've found error messages like these:

        App <Microsoft.MicrosoftSkyDrive> crashed with an unhandled Javascript exception. App details are as follows:
        Display Name: <SkyDrive>, AppUserModelId: <microsoft.microsoftskydrive_8wekyb3d8bbwe!Microsoft.MicrosoftSkyDrive>
        Package Identity: <microsoft.microsoftskydrive_16.4.4204.712_x64__8wekyb3d8bbwe> PID: <4452>.
        The details of the JavaScript exception are as follows:
        Exception Name: <WinRT error>, Description: <Loading the state store failed.>,
        HTML Document Path: </modernskydrive/product/skydrive/App.html>,
        Source File Name: <ms-appx://microsoft.microsoftskydrive/jx/jx.js>,
        Source Line Number: <1>, Source Column Number: <27246>, and Stack Trace:
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:27246 localSettings()
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:51544 _initSettings()
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:54710 getApplicationStatus(boolean)
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:48180 init(object)
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:45583 Application(number, boolean)
        ms-appx://microsoft.microsoftskydrive/modernskydrive/product/skydrive/App.html:216:13 Anonymous function(object)

    Using ProcMon, I see a lot of ACCESS DENIED messages, like this one:

        Date & Time: 12-9-2012 9:32:20
        Event Class: File System
        Operation: CreateFile
        Result: ACCESS DENIED
        Path: D:\Users\John\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe\Settings\settings.dat
        TID: 2520
        Duration: 0.0000149
        Desired Access: Read Data/List Directory, Write Data/Add File, Read Control
        Disposition: OpenIf
        Options: Sequential Access, Synchronous IO Non-Alert, No Compression
        Attributes: N
        ShareMode: None
        AllocationSize: 0

    Any idea how to solve this? I noticed that the app folders, e.g. D:\Users\john\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe, had a different owner than the old profile folder: the old profile folder had john as owner, whereas my new profile folder had the Administrators group as owner. Changing this didn't help, unfortunately.
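
    A hedged sketch (elevated cmd): modern apps run in per-package sandboxes that are authorized through the "ALL APPLICATION PACKAGES" principal, so missing ACLs on the moved Packages tree would produce exactly this ACCESS DENIED pattern. The path and user name are this post's examples, not universal values.

        icacls "D:\Users\john\AppData\Local\Packages" /setowner john /T
        icacls "D:\Users\john\AppData\Local\Packages" /grant "ALL APPLICATION PACKAGES":(OI)(CI)F /T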

  • Barriers to IPv6 deployment: addressing

    - by sysadmin1138
    There are several things keeping IPv6 deployment from being a topic of active discussion here at my work. There are the usual technical issues, but one non-technical one appears to be a major stumbling block on the path to actually getting a deployment project going: addresses, memorizing of.

    Specifically, IPv4 addresses are comprehensible, and IPv6 addresses just look like a big long string of hex. The human mind has real trouble memorizing lists of more than 7-8 items, and an IPv4 address (192.168.231.148) has four items in it, which makes it easy for us to memorize. A fully populated IPv6 address has not only 8 sections, but each section has 4 hex digits in it. IPv6 addresses were not designed for memorization. To the technician who knows that the DNS server is at 192.168.42.42 (or more likely "42.42", since the company prefix is likely memorized), the idea of memorizing an IPv6 address fills them with dread, which in turn makes them much less enthusiastic about participating in an IPv6 deployment project.

    Because of how our network works, we're not fully dynamic in terms of v4 addressing. We have several to many subnets that are entirely statically assigned, for a variety of reasons, chief among them being that the overhead of static DHCP assignments is perceived as too great. Also, some devices still aren't smart enough to pull DNS addresses out of DHCP while also having a static assignment, and therefore require manually configured DNS settings. So some v6 address memorization will have to be done.

    We're not under any mandate to get v6 out the door, so we don't have pressure from the top. However, it is time to start prepping our infrastructure to handle IPv6, even if we don't convert wholesale. For those of you who have been in IPv6-land for a while, what shortcut methods do you use to discuss or keep track of subnets and specific/critical IP addresses? If I can help reduce some of the dread surrounding IPv6, we might get the project going.
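
    A hedged sketch of common memorability tricks, using the documentation prefix 2001:db8::/32 as a stand-in for a real allocation:

        # one /64 per VLAN, with the VLAN number as the subnet ID
        2001:db8:42:10::/64          # VLAN 10
        # critical hosts keep their familiar v4 last octet
        2001:db8:42:10::42           # "the DNS server is ::42"
        # embedded dotted-quad notation is also legal in the low 32 bits
        2001:db8:42:10::192.168.42.42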

  • Five stars of open data - example and review

    - by Joe
    (There may be a better-suited SE site for this question, so feel free to shift it.) I have some data I'd like to make open to the public: a synthesis of some related data retrieved from freedom of information requests over the last year. The data itself is at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.csv or, for fans of Excel, at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.xlsx. It's no more than a table with about five columns.

    I'd like to make this properly open data, so I was looking at the 5-star deployment scheme for Open Data. Much of it is fine, but I'm confused towards the end and could do with an explanation from people who know the answers. To achieve the star levels I need to:

    1. "make your stuff available on the Web (whatever format) under an open license" - trivial; all I have to do is put up notes on the page giving the provenance of the data.
    2. "make it available as structured data (e.g., Excel instead of image scan of a table)" - done.
    3. "use non-proprietary formats (e.g., CSV instead of Excel)" - done.
    4. "use URIs to identify things, so that people can point at your stuff" - this is where I start to get a bit hazy: does this mean there should be a URI for every line in the table?
    5. "link your data to other data to provide context" - this isn't massively clear to me either: does this mean giving the provenance of the data? One column of the data I've put out is a link to where the data came from; is that the sort of thing we're looking at?

    Any and all information and answers welcome. Edit: recommendations of an SE site or other place better suited to this question would also be cool...

  • Postfix cannot deliver mail to Cyrus mailbox on Ubuntu 11.10 server

    - by user105804
    I have installed and configured Postfix and the Cyrus IMAP server with web-cyradm according to this document: http://www.delouw.ch/linux/Postfix-Cyrus-Web-cyradm-HOWTO/html/index.html. I can access the web-cyradm interface, I can create new domains and new users, and I can log in via IMAP after creating a user account. However, Postfix fails to deliver mail to the Cyrus mailboxes; the mail log contains the errors shown below. Installing any IMAP server other than Cyrus is not an option, because it is needed by the web application. Please advise me how to make Postfix deliver email to the Cyrus mailboxes. The solution need not necessarily include web-cyradm, but there should be a web interface for managing mail domains and mailboxes that is as user-friendly as possible.

        Dec 30 22:46:17 acer-tower cyrus/lmtpunix[4865]: accepted connection
        Dec 30 22:46:17 acer-tower cyrus/lmtpunix[4865]: lmtp connection preauth'd as postman
        Dec 30 22:46:17 acer-tower postfix/cleanup[4868]: 065D5240035: message-id=<[email protected]>
        Dec 30 22:46:17 acer-tower cyrus/lmtpunix[4865]: verify_user(user.imap0001) failed: Mailbox does not exist
        Dec 30 22:46:17 acer-tower postfix/bounce[4867]: 6C6CA24185C: sender non-delivery notification: 065D5240035
        Dec 30 22:46:17 acer-tower postfix/qmgr[4833]: 065D5240035: from=<>, size=3372, nrcpt=1 (queue active)
        Dec 30 22:46:17 acer-tower postfix/qmgr[4833]: 6C6CA24185C: removed
        Dec 30 22:46:17 acer-tower postfix/lmtp[4866]: 53421240372: to=<[email protected]>, orig_to=<[email protected]>, relay=home.webshop-software.ch[/tmp/lmtp], delay=165, delays=165/0.02/0.17/0.09, dsn=5.1.1, status=bounced (host home.webshop-software.ch[/tmp/lmtp] said: 550-Mailbox unknown. Either there is no mailbox associated with this 550-name or you do not have authorization to see it. 550 5.1.1 User unknown (in reply to RCPT TO command))
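
    A hedged sketch based on the "verify_user(user.imap0001) failed: Mailbox does not exist" line: the account may exist in the admin interface while no Cyrus mailbox was actually created for it. Creating one by hand and retrying delivery would confirm that (the "cyrus" admin user is the HOWTO's convention, an assumption here):

        cyradm --user cyrus localhost
        # at the cyradm prompt:
        #   cm user.imap0001
        #   quit
        postqueue -f    # tell Postfix to retry the queued mail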

  • What steps to take when CPAN installation fails?

    - by pythonic metaphor
    I have used CPAN to install Perl modules on quite a few occasions, and I've been lucky enough to have it just work. Unfortunately, I was trying to install Thread::Pool today, and one of the required dependencies, Thread::Conveyor::Monitored, failed its tests:

        Test Summary Report
        -------------------
        t/Conveyor-Monitored02.t (Wstat: 65280 Tests: 89 Failed: 0)
          Non-zero exit status: 255
          Parse errors: Tests out of sequence.  Found (2) but expected (4)
                        Tests out of sequence.  Found (4) but expected (5)
                        Tests out of sequence.  Found (5) but expected (6)
                        Tests out of sequence.  Found (3) but expected (7)
                        Tests out of sequence.  Found (6) but expected (8)
        Displayed the first 5 of 86 TAP syntax errors.
        Re-run prove with the -p option to see them all.
        Files=3, Tests=258, 6 wallclock secs ( 0.07 usr 0.03 sys + 4.04 cusr 1.25 csys = 5.39 CPU)
        Result: FAIL
        Failed 1/3 test programs. 0/258 subtests failed.
        make: *** [test_dynamic] Error 255
        ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz
        /usr/bin/make test -- NOT OK
        //hint// to see the cpan-testers results for installing this module, try:
        reports ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz
        Running make install
          make test had returned bad status, won't install without force
        Failed during this command:
        ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz: make_test NO

    What steps do you take to start seeing why an installation failed? I'm not even sure how to begin tracking down what's wrong.
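
    A hedged sketch of a first debugging pass: rerun the failing test by hand from CPAN's build directory (the path below is the default; yours may differ) to see the raw TAP output, and only force-install if you decide the failure is harmless.

        cd ~/.cpan/build/Thread-Conveyor-Monitored-0.12*
        perl Makefile.PL && make
        prove -bv t/Conveyor-Monitored02.t   # -b uses blib, -v shows every test line
        # last resort, at your own risk:
        cpan -f -i Thread::Pool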

  • How to diagnose a Remote Assistance problem

    - by cantabilesoftware
    I have a long-standing issue with Remote Assistance between a home and a work PC. My wife and I both use MSN Messenger, and I used to be able to control her PC at home via MSN Remote Assistance. Some time ago this stopped working and I don't know why. We're both running the latest version of MSN Live Messenger and I've checked that the appropriate firewall ports are open, but it still doesn't work; MSN just says something useless like "The person isn't responding". Any suggestions for how I can diagnose this?

    More info: I just tried direct Remote Desktop between the work PC and the home PC and it works fine, so I presume all the appropriate ports are open. Just Remote Assistance doesn't work. I'd like to get RA working so I can demonstrate how to do things remotely: with Remote Desktop the person at the other end gets booted off and can't see, whereas with Remote Assistance they can follow along step by step.

    Experimenting with this some more, the notebook that I was using at work today, which refused to connect, works fine for Remote Assistance when I bring it home. So I guess this must be a problem with our network configuration at work. I've checked that 3389 is open on the firewall on the office router, and Remote Desktop works both ways... just not Remote Assistance. I've read that Remote Assistance won't work if client and server are both behind non-UPnP/NAT routers; if one has UPnP it's supposed to work. The office router doesn't have UPnP enabled, but my home one does. I've also scoured the event logs on both ends; nothing noteworthy (unless I'm looking in the wrong spot).

    Note (copied from comment): I've just tried ShowMyPC, which is based on VNC, and it works, but I'd still like to figure out what's wrong with RA - it's just bugging me. The question is only about Remote Assistance; no need to propose solutions based on other programs. [/edit by Gnoupi]
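
    A hedged way to take Messenger out of the equation entirely: msra.exe (Vista and later) can create an invitation file directly, so if this route also fails, the problem is RA's NAT traversal rather than Messenger. The file name and password below are placeholders.

        rem on the PC to be helped: create an invitation file plus password
        msra /saveasfile C:\invite.msrcIncident SomePass1
        rem send the file to the helper, who opens it and types the password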

  • Does this exist: a standardized way of documenting a file-system structure

    - by eegg
    At work, I'm in charge of maintaining the organization of a whole lot of varied data on a standard file system. Part of this is coming up with a sensible classification (by similarity, need, read/write access, etc.), but the bigger part is actually documenting it: what documents/files/media should go where, what should not be in this directory, "for something slightly different, see ../../other-dir", and so on.

    At the moment, I've documented this using a plaintext file filing.txt in every directory I want to document. If someone is unsure what's meant to be in any directory, they read that file. This works alright, but it seems odd that I have this primitive custom solution to a problem that any maintainer of a non-trivial directory structure must experience. Every company I've known of, for example, has some kind of shared file system where agreed terminology for categorization is important. In my experience, people just have to learn what's what by trial and error and experimentation.

    So allow me to propose a better solution, and hopefully you can tell me if it exists. Any directory on any filesystem can have a hidden plaintext file named .filing. Its contents are descriptive human language. It uses some markup like Markdown, with little more than bold, italic, and (relative) hyperlinks to other directories. Now a suitably enabled file browser will check for a file named .filing whenever it displays a directory. If it exists, its contents are parsed and displayed in an unobtrusive pane near the directory-path widget. Any links therein can be clicked, and the user will be taken to the target directory of that link.

    I think that the effort of implementing such a standard would pay back many times over in usability gains. We would have, say, plugins for Nautilus, Konqueror, etc. It could be used to display directory information in the standard file lists served by webservers. And so on. So, question: does such a thing exist? If not, why not? Do people think it's a worthwhile idea?
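
    A minimal sketch of the convention for shell users, as a stopgap while no file browser supports it: print a directory's .filing note on every cd (plain bash; the function deliberately shadows the builtin).

        cd() {
            builtin cd "$@" || return
            # show the directory's documentation, if present
            [ -f .filing ] && cat .filing
            return 0
        }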

  • My home box as my own host?

    - by Majid
    Hi all, I have a 512 kb/s DSL service at home. I do not have a static IP, but I can get one if I pay my ISP a bit extra. Now, if I get the static IP, can I make my home box act as my internet host? What else do I need? Thanks

    P.S. I know that, if it is possible at all, a site I host this way might be slow. That is alright; my question is whether it is possible at all.

    Edit: I need this for very small traffic. I am a PHP developer, and for my projects I am often asked to provide a demo. I currently use a free host for this purpose, but it is down most of the time and support is non-existent. So I thought I'd set up my home computer as my test server. With this, please note that I will only occasionally have 'visitors', and that will be one or possibly two visitors at any time. These demos showcase functionality, so no big images are served, and a page view will normally generate under 100 KB of traffic.

  • Linux: can't downgrade ATI graphics driver

    - by java.is.for.desktop
    I had the old 10.9 ATI driver, and it worked fine. Sadly, I decided to upgrade it to the new 10.12, and after that, HDMI stopped working. So I want to downgrade back to 10.9. I did everything by the book, but it fails. As root, I do:

        cd /usr/share/ati
        sh ./fglrx-uninstall.sh

    The uninstall process tells me that everything was uninstalled fine, and after a reboot I have the default open-source ATI driver that comes with X11 (or the kernel?). The ATI control panel shortcut points to nowhere, so the driver seems uninstalled. Now I install the older (10.9) proprietary ATI driver, and it also finishes successfully. But after a reboot, the ATI control panel tells me that I still have 10.12, and that seems to be true, because HDMI doesn't work. So how can I completely purge the new non-working driver and downgrade to the old one?
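
    A hedged cleanup sketch: look for fglrx leftovers that can shadow a freshly installed driver (the paths below are the usual suspects, not an exhaustive or distribution-exact list).

        # stale libraries or kernel modules left behind by 10.12
        find /usr/lib /usr/lib64 -name 'fglrx*' 2>/dev/null
        find /lib/modules -name 'fglrx*' 2>/dev/null
        # remove a leftover xorg.conf, then regenerate it after reinstalling 10.9
        rm -f /etc/X11/xorg.conf
        aticonfig --initial    # fglrx's own tool; run once 10.9 is installed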
