Search Results

  • Scheduled tasks fail to start unless I'm logged in to the server

    - by Chuck
    The tasks need to open a CMD window, pass net use commands, then run a DIR command, piping the output to a file on the server. Whether the task is set to run as me (Sysadmin) or as one of the system accounts, it will only run if I'm physically logged into the server. The "run as batch job" right is set in the security properties for both users (me and the service account), security is granted on all the directories, and so on.

    It almost acts as if a scheduled task, not being physically connected to a display, can't create a CMD window and pass it the window ID so the command can be sent; but that's just my guess. Does anyone know of a document that explains how the server handles creating a window for a scheduled task when no logged-on user is associated with it? If I log onto the box and run the scheduled tasks, they run fine; otherwise they produce no errors or event log entries, just show that they ran successfully and set the next run time. I have tried with the "run only if logged in" checkbox both on and off, and it makes no difference. Other tasks work fine, but they act only on local drives, with no display writing or updating taking place, so I'm guessing the system either can't instantiate a window when no display is connected to a logged-on user, or can't establish one when it tries to create a virtual screen. You'd think it would just create a memory map and then map it to a display device, but that doesn't seem to be the case, and I can find no documentation on how the system handles a scheduled task or how to invoke a fake or virtual screen it could write to so that it appears a user was connected.

    This is driving me nuts; I've tried everything I can think of, as well as our network team's ideas, and nothing seems to work. Thanks.
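
    As an aside, a minimal sketch of what such a task's batch file can look like when nothing in it depends on an interactive console (the share and paths below are placeholders, not details from the question):

        @echo off
        rem Map the share, write a directory listing to a file on the server,
        rem then drop the mapping. Nothing here needs a visible window, so it
        rem also works in the non-interactive session a scheduled task gets.
        rem (Add /user:DOMAIN\account to net use if the task account cannot
        rem reach the share on its own.)
        net use Z: \\server\share
        dir Z:\ > \\server\share\listing.txt
        net use Z: /delete /y

    The task's action would then simply be cmd.exe /c "C:\scripts\listing.cmd", so whether a console window is ever shown no longer matters.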

    Read the article

  • 500 Internal Server Error after changing .NET Framework Version to 4.0 in IIS7

    - by René
    I just changed the .NET Framework version of the application pools in IIS7 Manager, following these instructions. Now when I try to re-upload my ASP.NET page, it shows me a 500 - Internal Server Error. I have tried uploading it built for .NET 2.0 (x86, x64, AnyCPU) and 4.0 (x86, x64, AnyCPU), and everything gives the same error. These are all the details the error gives me: "There is a problem with the resource you are looking for, and it cannot be displayed." When keeping the .NET version at 2.0 on the server, it works just fine. Also, when uploading "index.htm", it works fine as well; it just shows the HTML page. This is on Windows Server 2008 R2, by the way.

    EDIT: I have finally found out how to get the error details. Here they are:

        Handler "PageHandlerFactory-Integrated" has a bad module "ManagedPipelineHandler" in its module list.

        Most likely causes:
        • Managed handler is used; however, ASP.NET is not installed or is not installed completely.
        • There is a typographical error in the configuration for the handler module list.

        Things you can try:
        • Install ASP.NET if you are using managed handler.
        • Ensure that the handler module's name is specified correctly. Module names are case-sensitive and use the format modules="StaticFileModule,DefaultDocumentModule,DirectoryListingModule".

    I am sure that I have installed ASP.NET completely. Please help me, -René
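
    For reference, a hedged sketch of the command-line equivalents of the steps being discussed; the pool name is a placeholder and the paths assume a default 64-bit installation, so treat this as illustration rather than a prescribed fix:

        rem Switch an application pool to the .NET 4.0 runtime (same as the IIS Manager change).
        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /managedRuntimeVersion:v4.0

        rem Re-register ASP.NET 4.0 with IIS, which the quoted error suggests may be incomplete.
        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -iru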

    Read the article

  • IPCop server slows down download speed

    - by noocyte
    I have an IPCop server running at home; it had been doing just fine for ~5 months, but last week I suddenly started getting time-outs and slow downloads from the 'net. I first thought that this was my ISP acting up, then I thought it might be one of my 3 switches or some of my cabling. In due order I've tested everything above and found it all to be working as it should. The only factor remaining is my IPCop server. Facts: I've got a 15/15 Mbit line (fiber) and I get ~15 Mbit upload, but only 0.5 Mbit download with the IPCop box as router (ISP router set in bridge mode). If I connect without the IPCop box (using the ISP router) I get ~12 Mbit upload and ~15 Mbit download. The load on the IPCop box appears to be light, and it used to handle this traffic just fine 2 weeks ago. The memory usage is ~60%; I restarted it and tested again, and the memory fell to ~50% (after 5 months of uptime). I'm thinking that one of my NICs is busted, but I'm sort of perplexed that this could be the outcome: slow download but full-speed upload. Has anybody ever seen that happen before? Could it just be one of the NICs needing to be replaced? I will try that as soon as I can get my hands on a couple of new ones.
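
    Not something from the question, but a cheap check for this kind of asymmetric throughput is the negotiated speed/duplex on the IPCop box's NICs, assuming ethtool is available there and that eth1 is the red (WAN) interface; both are assumptions:

        # Show negotiated link speed and duplex on the WAN-facing NIC
        ethtool eth1

        # If it has fallen back to 10 Mbit or half duplex, force renegotiation
        ethtool -s eth1 autoneg on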

    Read the article

  • Windows 2008 RemoteAPP client disconnects within a matter of minutes

    - by Jeroen Wilke
    I'm having an odd problem with Windows 2008 TS, and remote applications specifically. The situation is as follows: the TS idle timeout is disabled via GPO, and TS terminates disconnected sessions after 1 hr (via GPO). My users can log on to the Terminal Server and get a full desktop, or use .rdp files that give access to a few remote applications. When a user connects to a full desktop, everything is fine and dandy: they remain logged on indefinitely, and when they disconnect the session is terminated after an hour. However, when a user connects using a remote application link, the client seems to disconnect after only a few minutes of inactivity; when you click the window, the session reconnects. Event IDs on the TS server:

        4779: This event is generated when a user disconnects from an existing Terminal Services session, or when a user switches away from an existing desktop using Fast User Switching.

        4778: This event is generated when a user reconnects to an existing Terminal Services session, or when a user switches to an existing desktop using Fast User Switching.

    Users are connecting directly to 3389, not using a TS Gateway at the moment. This behaviour is consistent across the different clients that we have: full desktop is fine, RemoteApp constantly disconnects. The .rdp file used doesn't list any interesting parameters, aside from what application to launch and where to find it. Can someone explain to me how there can be a difference in behaviour between full desktop and RemoteApp, since essentially they use the exact same client? Regards, Jeroen
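
    For comparison, a RemoteApp .rdp file is usually only a handful of lines like the sketch below (server and application names are invented, not taken from the question); the disconnect behaviour itself is governed on the server side rather than by anything in this file:

        full address:s:ts01.example.local
        remoteapplicationmode:i:1
        remoteapplicationprogram:s:||MyApp
        remoteapplicationname:s:My Application
        alternate shell:s:rdpinit.exe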

    Read the article

  • Why do weekly tasks created via PowerShell using a different user fail with error 0x41306

    - by Danny Tuppeny
    We have some scripts that create scheduled jobs using PowerShell as part of our application. When testing them recently, I noticed that some of them always failed immediately, and no output was ever produced (they don't even appear in the Get-Job list). After many days of tweaking, we've managed to isolate it to any jobs that are set to run weekly. Below is a script that creates two jobs that do exactly the same thing. When we run this on our domain and provide credentials of a domain user, then force both jobs to run in the Task Scheduler GUI (right-click - Run), the daily one runs fine (0x0 result) and the weekly one fails (0x41306).

    Note: If I don't provide the -Credential param, both jobs work fine. The jobs only fail if the task is both weekly and running as this domain user. I can't find information on why this is happening, nor think of any reason it would behave differently for weekly jobs. The "History" tab in the Task Scheduler has almost no useful information, just "Task stopping due to user request" and "Task terminated", both of which have no useful info:

        Task Scheduler terminated "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" instance of the "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" task.

        Task Scheduler stopped instance "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" of task "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" as request by user "MyDomain\SomeUser".

    What's up with this? Why do weekly tasks run differently, and how can I diagnose this issue? This is PowerShell v3 on Windows Server 2008 R2. I've been unable to reproduce this locally, but I don't have a user set up in the same way as the one in our production domain (I'm working on this, but I wanted to post this ASAP in the hope someone knows what's happening!).

        Import-Module PSScheduledJob

        $Action = { "Executing job!" }
        $cred = Get-Credential "MyDomain\SomeUser"

        # Remove previous versions (to allow re-running this script)
        Get-ScheduledJob Test1 | Unregister-ScheduledJob
        Get-ScheduledJob Test2 | Unregister-ScheduledJob

        # Create two identical jobs, with different triggers
        Register-ScheduledJob "Test1" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Weekly -At 1:25am -DaysOfWeek Sunday)
        Register-ScheduledJob "Test2" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Daily -At 1:25am)

    Read the article

  • Apache, Tomcat 5 and problem with HTTP basic auth

    - by Juha Syrjälä
    I have set up Tomcat with a webapp that uses HTTP basic auth on some of its URLs. There is an Apache server in front of the Tomcat. I have set up Apache as a proxy like this (all traffic should go directly to Tomcat), in /etc/httpd/conf.d/proxy_ajp.conf:

        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
        ProxyPass / ajp://localhost:8009/
        ProxyPassReverse / ajp://localhost:8009/

    There is a webapp installed at the root of Tomcat (ROOT.war), so I should be able to use http://localhost/ to access my webapp. But it is not working with HTTP basic auth. The problem is that everything works until I try to access URLs that are protected by HTTP basic auth; URLs without authentication work just fine. When accessing such a URL via Apache I get an error message from Apache. If I access the same URL directly on Tomcat, everything works just fine. I am getting this in the Apache error log:

        [Wed Sep 01 21:34:01 2010] [error] proxy: dialog to [::1]:8009 (localhost) failed

    The access log looks like this:

        ::1 - - [01/Sep/2010:21:34:01 +0300] "GET /protected_path/ HTTP/1.0" 503 360 "-" "w3m/0.5.2"

    I am using: Fedora release 13 (Goddard), httpd-2.2.16-1.fc13.x86_64, tomcat5-5.5.27-7.4.fc12.noarch. The basic auth is implemented in the webapp (not in Apache or Tomcat). The webapp is actually implemented in Scala/Lift, but that shouldn't matter; the auth works if I access Tomcat directly. This is the error page I am getting from Apache (it is curious that the title is Unauthorized and not Internal Error):

        Unauthorized
        The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
        Apache/2.2.16 (Fedora) Server at my.server.name.com Port 80

    It could be that Apache is seeing something other than a 200 OK response and thinks that it is an error, when it actually should pass the received 401 Unauthorized response directly to the browser. If this is the problem, how do I fix it?
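
    One hedged observation, not a confirmed fix: the error log shows Apache dialling [::1]:8009, i.e. IPv6 localhost, while Tomcat's AJP connector is often bound only to IPv4. Pinning the proxy to 127.0.0.1 rules that out, whether or not it explains why only the protected URLs fail:

        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

        # Use the IPv4 loopback explicitly so mod_proxy does not try ::1
        ProxyPass / ajp://127.0.0.1:8009/
        ProxyPassReverse / ajp://127.0.0.1:8009/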

    Read the article

  • Internet Explorer / Windows 7 does not want to show HTML file from local network drive

    - by Jaanus
    Setup: I have Windows 7 running inside VirtualBox on a Mac OS X host. I have a shared drive with some HTML files that I am mounting as a local drive W: in Windows, from the VirtualBox server \\VBOXSVR. I want to look at them with a browser in Windows. Chrome in Windows 7 opens and shows those HTML files just fine (file:///W:/welcome.html). But Internet Explorer does not, and shows this error instead of the files:

        Internet Explorer cannot display the web page

        What you can try: [button Diagnose Connection Problems]

        More information
        This problem can be caused by a variety of issues, including:
        • Internet connectivity has been lost.
        • The website is temporarily unavailable.
        • The Domain Name Server (DNS) is not reachable.
        • The Domain Name Server (DNS) does not have a listing for the website's domain.
        • If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section.

    For the internet zone, the status bar shows: Internet | Protected Mode: On. IE settings are a mystery to me, and I could possibly get it to work by tweaking IE settings, but I don't know which ones. How do I make IE show the same files that Chrome is happy to show? (Chrome showing them means that the files themselves are fine; there is something about the setup that just makes IE be a diva.)

    Read the article

  • Virtual Machine loses network connectivity on Hyper-V Cluster

    - by Chris W
    We're running a number of VMs on a 6 node failover cluster of blades using Hyper V. We have an intermittent issue (every few days at different times - not a fixed frequency) of VMs losing network connectivity. Console access to the VM suggests all is fine and the underlying blade has normal connectivity. To resolve the problem we either have to re-start the VM or, more usually, we do a live migration to another blade which fires up connectivity and we then migrate it back to the original blade. I've had 3 instances of this happen with a specific VM running on a particular blade however it has happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem as the event logs provide no help? Edit: I've checked that each blade is running the latest NIC drivers and all seem to be fine. Something that is confusing me - a failover or restart of the VM resolves the issue. Whilst I need to work out the underlying issue that is causing the NICs to hang I'm also concerned that the VM didn't failover to another node which would have solved the outage for me. Is there a way to configure the cluster so that it can tell that the VM guest has lost connectivity and fail it over? As things stand the cluster is assuming that the VM is running happily as I presume Hyper V says everything is great even though there is a problem.

    Read the article

  • Network connection to Firebird 2.1 became slow after upgrading to Ubuntu 10.04

    - by lyle
    We've got a setup that we're using for different clients: a program connecting to a Firebird server on a local network. So far we have mostly used 32-bit processors running Ubuntu LTS (recently upgraded to 10.04). Now we have introduced servers running on 64-bit processors with Ubuntu 10.04 64-bit. Suddenly some queries run slower than they used to. In short: running the query locally works fine on both the 64-bit and 32-bit servers, but when running the same queries over the network the 64-bit server is suddenly much slower. We did a few checks with both local and remote connections to both 64-bit and 32-bit servers, using identical databases and identical queries, running in FlameRobin. Running the query locally takes a negligible amount of time: 0.008 s on the 64-bit server, 0.014 s on the 32-bit servers. So the servers themselves are running fine. Running the queries over the network, the 64-bit server suddenly needs up to 0.160 s to respond, while the 32-bit server responds in 0.055 s. So the older servers are twice as fast over the network, in spite of the newer servers being twice as fast when run locally. Apart from that the setup is identical: all servers are running the same installation of Ubuntu 10.04, the same version of Firebird and so on; the only difference is that some are 64-bit and some 32-bit. Any ideas? I tried to google it, but I couldn't find any complaints that Firebird 64-bit is slower than Firebird 32-bit, except that the Firebird 2.1 change log mentions a new network API which is twice as fast, as soon as the drivers are updated to use it. So I could imagine that the 64-bit driver is still using the old API, but that's a bit of a stretch, I guess. Thanks in advance for any replies! :)

    Read the article

  • How to update the hard disk device drivers for a ghosted hard drive image so it can run on different hardware: Ultra ATA > SATA

    - by rism
    I've ghosted a WinXP machine from a laptop with an Ultra ATA drive, and would like to set it up as a multiboot option on another laptop with a SATA drive. I can install the partition fine, but if I make it active and try to boot it, it blue-screens. The blue screen is so fast I can't even read it, other than to make out that it's saying "something"; I'm picking the hard drive as the likely culprit, since it goes through POST fine. So basically I would like to boot into my Win7 OS, and then somehow manipulate the XP partition to use updated drivers for the new hard drive/laptop, so that I can at least boot into the XP OS on the new machine and update all the other drivers in safe mode or whatever to get it to run. I assume someone is going to tell me to just do a fresh install, but that kind of defeats the purpose of ghosting at this point. There is a significant amount of personalisation and development setup on the XP machine that I would like to just transfer as is. As it stands I've invested minimal time in getting it to run, just a ghost and recovery and then a blue-screen boot or two, so it's still well worth it to me, time-wise, to try this way. Thanks.
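
    Purely as an illustration of the "manipulate the XP partition from Win7" idea, and assuming the classic 0x7B inaccessible-boot-device cause (the drive letter X: and the use of ControlSet001 are assumptions): load the XP SYSTEM hive offline and set the generic IDE/ATA services to start at boot, so XP can come up with the SATA controller in IDE/compatibility mode. Full AHCI mode would still need the vendor's SATA driver added separately.

        rem Load the offline XP SYSTEM hive (XP partition mounted as X: in Win7)
        reg load HKLM\OfflineXP X:\WINDOWS\system32\config\system

        rem Boot-start (Start = 0) the generic IDE/ATA miniport services
        reg add HKLM\OfflineXP\ControlSet001\Services\atapi /v Start /t REG_DWORD /d 0 /f
        reg add HKLM\OfflineXP\ControlSet001\Services\intelide /v Start /t REG_DWORD /d 0 /f
        reg add HKLM\OfflineXP\ControlSet001\Services\pciide /v Start /t REG_DWORD /d 0 /f

        reg unload HKLM\OfflineXP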

    Read the article

  • One host on a network can't connect to one other host

    - by Max Williams
    I'm on a local network with a few other people. One of the hosts is a virtual machine running in VirtualBox on a Mac, and it has the IP address 192.168.0.35 (the VM, that is, not the Mac host). Everyone except one guy can connect (i.e. ping, ssh, etc.) to that machine. When that one guy tries to ping it he gets:

        Request timeout for icmp_seq 0
        Request timeout for icmp_seq 1
        Request timeout for icmp_seq 2

    which I understand is just how certain Mac OS versions report an unreachable host. He can ping all the other hosts on the network, i.e. our computers, and we can all ping the VM fine and connect to it with no problems. His IP is 192.168.0.17. I ssh'd onto his machine (as a new user 'anon') and saw the same problem. I can ssh onto the 192.168.0.35 VM as well; from there, I can ping other users, but when I ping the problem guy, it's unreachable that way round as well. He restarted his Mac and was fine for a while, then it just stopped working again, and he's got a different IP to before. Any ideas, anyone? I don't know enough about this stuff to even diagnose the problem. Thanks, Max
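
    A hedged first diagnostic, since only this one pairing fails in both directions: compare what the problem Mac has cached in ARP for the VM against the VM's real MAC address (the commands are generic macOS/Linux ones, nothing specific to this network):

        # On the problem Mac (192.168.0.17): what is cached for the VM?
        arp -a | grep 192.168.0.35

        # Flush a possibly stale entry and retry
        sudo arp -d 192.168.0.35
        ping -c 3 192.168.0.35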

    Read the article

  • How do I Setup Multiple Sites in HostGator Shared Hosting?

    - by cillosis
    I recently decided to consolidate all of my random projects into a single hosting account, as it was starting to get very expensive to run each on an individual hosting plan. I purchased the HostGator Baby plan, which allows hosting of multiple domains. You have to set it up with a root domain name, which is fine (I used my portfolio domain name). As far as file structure goes, I wanted a folder for each site in /public_html, so the structure looks like this:

        - public_html/
            - myportfolio.com/
                - ... my files ...
            - anothersite.com/
                - ... my files ...
            - thirdsite.com/
                - ... my files ...

    I set up add-on domains and pointed them to their respective folders, which works fine. My problem is that the root domain, e.g. myportfolio.com, expects its files to be contained at the root of /public_html rather than within the folder I created. I set up a redirect to point requests for myportfolio.com to myportfolio.com/myportfolio.com/, which works initially, except that (at least in my WordPress installation) it still references its root folder as public_html. TL;DR: What is the best way to go about setting up multiple-site hosting in a shared hosting environment (i.e. I can't set up vhosts)? Does anybody know of any tutorials or videos that walk through this more clearly? Thanks.
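
    One common shared-hosting pattern for this, sketched below under the assumption that the host honours .htaccess/mod_rewrite (the domain and folder simply mirror the example structure above, and WordPress would still need its Site/Home URL settings pointed at the right place):

        # /public_html/.htaccess
        RewriteEngine On

        # Send requests for the primary domain into its own subfolder,
        # unless they already point there.
        RewriteCond %{HTTP_HOST} ^(www\.)?myportfolio\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/myportfolio\.com/
        RewriteRule ^(.*)$ /myportfolio.com/$1 [L]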

    Read the article

  • Task Scheduler not able to execute .vbs successfully

    - by Django Reinhardt
    Hi there, I've got this weird problem, which will hopefully have an obvious solution for some enlightened soul: We have several daily tasks we run via a .vbs script on our server (through the Task Scheduler), and for months it has been fine, but recently we've hit a problem. The .vbs script stopped executing successfully... but oddly it worked fine when run manually! The error given in these circumstances was always "Timeout". We thought we'd try a little creative thinking and run the .vbs another way: via a .bat file. Again we hit weird issues, but with a little more debugging information this time around. The .bat file is nothing more than:

        CScript "C:\location\script.vbs" > Log.txt

    But the Task Scheduler fails with the following error:

        0x1: An incorrect function was called or an unknown function was called.

    The Log.txt file says:

        CScript Error: Initialization of the Windows Script Host failed.
        (Not enough storage is available to process this command.)

    But get this: the .bat file executes perfectly (vbs script and all) if it's run with a double click! There's only a problem when it's run by Task Scheduler. What the hell? We're running Windows Server 2008 R2 (x64) and yes, the Task Scheduler's results are the same whether the user is logged in or not. Also, the user that can run the scripts successfully manually is the same user that runs the scripts in Task Scheduler. Thanks for any help with this weird problem!
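
    Not a diagnosis, just a hedged variant of the .bat worth trying: when Task Scheduler launches it, the working directory is typically System32 unless "Start in" is set, so the relative Log.txt may not land where expected, and capturing stderr sometimes surfaces the real error:

        @echo off
        rem Run the script non-interactively and keep the log at an absolute path
        cscript //B //Nologo "C:\location\script.vbs" >> "C:\location\Log.txt" 2>&1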

    Read the article

  • nginx+mysql5 loadtesting configuration strangeness

    - by genseric
    I am trying to set up a new server running on Debian 6 and trying to make it work smoothly under load. I've used a WordPress site as a test object, and tried the configurations on http://blitz.io. When I increase the MySQL max_connections from 50 to 200, lots of timeouts start to occur; at 50 there are no timeouts and pretty good response times. The nginx configuration is fine; I tuned the config so I don't see errors. So I presume it's related to the other configuration options in my.cnf. I have read a bit about the options but still can't work out what the max_connections problem is all about. By the way, the server has 16 GB of RAM and a fine i7 CPU. Here is the current my.cnf:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        wait_timeout=60
        connect_timeout=10
        interactive_timeout=120
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        bind-address = 127.0.0.1
        key_buffer = 384M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 20
        myisam-recover = BACKUP
        max_connections = 50
        table_cache = 1024
        thread_concurrency = 8
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completion

        [isamchk]
        key_buffer = 16M

    Thanks in advance. I asked this question on SO but it was closed as off-topic, so I believe this is an SF question.
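
    A hedged way to see whether connections really are the bottleneck while blitz.io is running (plain MySQL statements, nothing specific to this server):

        -- Connections in use vs. the configured ceiling
        SHOW GLOBAL STATUS LIKE 'Threads_connected';
        SHOW GLOBAL STATUS LIKE 'Max_used_connections';
        SHOW GLOBAL VARIABLES LIKE 'max_connections';

        -- Are queries piling up waiting on locks or disk?
        SHOW FULL PROCESSLIST;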

    Read the article

  • Windows Scheduled Startup Task doesn't appear to be fully working but why?

    - by Devtron
    I originally tried to use Group Policy to enforce a startup script. My startup script is a .CMD file, which calls 10 .exe files. Using Group Policy I could never get this to work, so I looked into using Scheduled Tasks, and here I am. I have tried two different versions of my script (for syntax purposes); I originally thought my syntax could be bad, so I tried a few approaches. Neither works.

    My #1 .CMD file approach's commands look similar to this:

        start "this is my title" /D "C:\Somepathhere\myExecutable.exe" "..\..\published\wc_task.wfc"

    My #2 .CMD file approach's commands look similar to this (it invokes a shortcut file):

        rundll32 shell32.dll,ShellExec_RunDLL "C:\Somepathhere\bin\Virtual Workflow.lnk" ^

    Both of these scripts work fine if I run them manually, either by running the .CMD file or even by forcing the Scheduled Tasks MSC console to "Run" the task. The manual process seems to work fine, but automated it does not. My scheduled task is set to run at startup and uses "highest privileges" to execute as Admin. At the end of my .CMD script I added a line to write to a text file, just to prove that the script was being run. That command looks like this:

        echo foo > C:\foo.txt

    When I reboot my server and Task Scheduler kicks in, I never get my ten .EXE files to run, but I do get C:\foo.txt on my drive. What gives?
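
    One hedged thing the echo test doesn't rule out is the working directory: when Task Scheduler launches the .CMD without a "Start in" folder it usually runs from System32, so the relative "..\..\published\..." argument resolves differently than it does on a double-click. A sketch of a variant that pins the directory and logs any errors (paths mirror the examples above):

        @echo off
        rem Pin the working directory so the relative path resolves the same way
        rem it does when the script is double-clicked.
        cd /d "C:\Somepathhere"

        rem Run one executable directly and capture stdout/stderr, so a silent
        rem failure leaves something behind in the log.
        "C:\Somepathhere\myExecutable.exe" "..\..\published\wc_task.wfc" >> C:\startup_log.txt 2>&1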

    Read the article

  • How do I make my Boot Camp partition bootable again?

    - by KJFMusic
    I'm having a similar problem to everyone else in this posting. I have 5 partitions, 3 of which I created for my Mac OS Lion installation, my Windows 7 installation, and a third for storage. Everything was running fine for quite some time until recently: my Windows 7 installation has suddenly stopped booting. Instead of a startup screen I get:

        Windows failed to start. A recent hardware or software change might be the cause.
        File: \BOOT\BCD
        Status: 0xc000000d
        Info: An error occurred while attempting to read the boot configuration data

    Mac OS Lion starts up fine. I'm unable to mount my "Bootcamp" partition or the "Storage" partition. On top of that, "Storage" has been renamed to "disk0s5". When I installed Windows 7 it didn't recognize the "Storage" partition that was created in Lion, so it merged what it thought was free disk space (I'm assuming the same space that Mac OS recognized as Storage) into the root drive of Windows 7 (Bootcamp). Are you able to assist?
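
    Not from the posting, but the stock way to rebuild a damaged \BOOT\BCD is from the Windows 7 install/repair media's command prompt; sketched below, with the caveat that on a Boot Camp (hybrid MBR/GPT) disk it is worth backing everything up first, since boot repairs can make the partition-table situation worse:

        rem From the Windows 7 DVD/USB: Repair your computer -> Command Prompt
        bootrec /scanos
        bootrec /rebuildbcd
        bootrec /fixboot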

    Read the article

  • sequential SSH command execution not working in Ubuntu/Bash

    - by kumar
    My requirement is that I have a text file containing a set of commands that need to be executed. My shell script has to read each command, execute it and store the results in a separate file. Here is the snippet which does the above:

        while read command
        do
            echo 'Command :' $command >> "$OUTPUT_FILE"
            redirect_pos=`expr index "$command" '>>'`
            if [ `expr index "$command" '>>'` != 0 ];then
                redirect_fn "$redirect_pos" "$command";
            else
                $command
                state=$?
                if [ $state != 0 ];then
                    echo "command failed." >> "$OUTPUT_FILE"
                else
                    echo "executed successfully." >> "$OUTPUT_FILE"
                fi
            fi
            echo >> "$OUTPUT_FILE"
        done < "$INPUT_FILE"

    A sample Commands.txt will be something like this:

        tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt
        gzip /var/tmp/logs.tar
        rm -f /var/tmp/list.txt

    This works fine for commands that need to be executed on the local machine. But when I try to execute the following ssh commands, only the first command gets executed. Here are some of the ssh commands added to my text file:

        ssh uname@hostname1 tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt
        ssh uname@hostname2 gzip /var/tmp/logs.tar
        ssh .. etc

    When I execute these in the CLI they work fine. Could anybody help me with this?
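
    One likely culprit, offered as an educated guess rather than anything stated above: ssh reads from standard input by default, so the first ssh invocation swallows the rest of $INPUT_FILE that the while read loop was going to process. The usual workarounds, sketched:

        # Either stop ssh from reading the loop's stdin with -n ...
        ssh -n uname@hostname1 tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt

        # ... or, inside the loop, point the executed command's stdin elsewhere:
        $command < /dev/null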

    Read the article

  • Monitor connected with WiDi just shows a black screen

    - by Pops
    I have a Dell XPS 18 and want to use an external monitor with it. The monitor has VGA and DVI-D inputs. The XPS 18 has no video output ports, but it does support Intel WiDi. I have a Netgear P2TV2000 WiDi receiver, but it only has composite and HDMI outputs. I'm connecting the receiver to the monitor with an HDMI cable and an HDMI/DVI-D adapter. So, in short, the video path is: Computer → WiDi → WiDi receiver → HDMI cable → HDMI/DVI-D adapter → Monitor After setting all this up, I can't get anything to display on the monitor. At the moment I power the monitor on, I can see my desktop for a brief fraction of a second, and then everything goes black. All of the relevant drivers have been updated to the latest versions. When I use a different WiDi-enabled computer to connect to the same monitor, everything works fine. When I use the original computer and receiver to connect to a TV, everything works fine. It's only when I connect the original computer to the monitor I want to use that the connection fails. What could be going wrong here, and how can I get the video to work consistently?

    Read the article

  • D-Link 2450U DSL router: Port forwarding forwards to the modem itself, not the specified IP

    - by axk
    I found a similar question but it has no satisfactory answers. I have a D-Link 2540U DSL router. It has a basic port-forwarding configuration (under DNS - Virtual Servers) in the administration panel where you specify: external port range, protocol, internal port range, and server IP address, and it is supposed to forward that port to that IP address. When I first set it up for a RealVNC connection it worked fine, just as I expected. Then I added a DynDNS configuration entry in the router's 'Dynamic DNS' section and added an additional SSH (22) forwarding rule. The SSH forwarding also worked fine (now with the dynamic hostname, but I suppose it doesn't make any difference as far as SSH is concerned). Then I removed the SSH rule, and after that the VNC forwarding stopped working, with the VNC client failing to connect (I tried to connect with telnet and it also failed, so it wasn't a VNC problem). After adding a rule for port 80 it turned out it would forward on port 80, though not to the specified server IP but to the modem itself. At least that is what it looks like, because it gives me the administration panel when I connect to my external IP (both using a browser and plain telnet, in which case I can see that it is mini_httpd sitting on the port, which is obviously the modem's administration panel). Has anybody encountered a similar problem with port forwarding? I have tried a reset through the administration panel and restoring a backup of the settings made before I started playing with port forwarding, but it didn't help. Should I do a 'hard' reset with the button on the modem? Is it any different from the administration panel's reset (Restore default)?

    Read the article

  • Enabling WinRM by Group Policy

    - by SaintNick
    I'm having partial success enabling WinRM through Active Directory GPOs in our Server 2008 R2 environment. I've created a GPO that enables "Allow automatic configuration of listeners" and also enables all the necessary predefined WinRM firewall rules. This GPO works fine for our web servers; indeed, this is reflected by "Server Manager Remote Management" nicely flipping to "enabled" in the Server Manager server summary. However, the same GPO applied to both our management servers, which are domain controllers, does not give the same result. I see the GPO settings being applied, including the listener, as confirmed by:

        C:\Windows\system32>winrm e winrm/config/listener
        Listener [Source="GPO"]
            Address = *
            Transport = HTTP
            Port = 5985
            Hostname
            Enabled = true
            URLPrefix = wsman
            CertificateThumbprint
            ListeningOn = 10.32.40.210, 10.32.40.211, 10.32.40.212

    But in Server Manager, Server Summary, Remote Management remains "disabled", and indeed when trying to connect to one of these machines Server Manager gives an "Access Denied". Manually enabling WinRM locally via Server Manager's "Configure Server Manager Remote Management" on either of these machines works fine. What can be the cause? Could it have something to do with these machines being DCs and needing extra settings in the GPO? Nick Reid
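
    Two hedged checks that separate WinRM itself from the Server Manager remoting layer (the host name is a placeholder); a working listener does not by itself appear to flip the "Remote Management" flag, which 2008 R2 toggles with its own script:

        # From the management workstation: does plain WinRM answer on the DC?
        Test-WSMan -ComputerName dc01.example.local

        # On the DC itself: what the "Configure Server Manager Remote Management"
        # link runs under the covers on 2008 R2.
        & "$env:windir\system32\Configure-SMRemoting.ps1" -force -enable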

    Read the article

  • Why can I not access the internet when Windows 7 finds no issue with the ethernet connection and the network can see my device?

    - by WannabeCoder
    So I just moved from a house to an apartment. In the house and the apartment I had U-verse set up, and in both I had my desktop connected via a ~40-foot Cat5 cable. However, upon moving to the apartment I found that my ethernet connection no longer provides internet. This would seem like a mundane problem if not for the following:

    • The router can see the computer on the network.
    • Windows 7 (the desktop's OS) detects no problems with the ethernet connection.
    • Connections over the internet (i.e. browser windows, Pandora, etc.) do not immediately fail. Instead they load for 2 minutes and then finally give up.
    • Devices connected over the WiFi (PS4, laptop) access the internet just fine.

    While removing the Cat5 cable from my house, I accidentally damaged the locking tab but managed to bend it back into the appropriate position. I would suspect that a bad Cat5 cable might be to blame if not for the above points (though I've heard bad Cat5 cables cause the most nonsensical problems) and the fact that I tested the cable by having it share internet between my laptop (working internet) and my desktop, and it functioned just fine and provided the desktop with internet. My ipconfig /all successfully finds a default gateway, DHCP server, and DNS server. What could possibly be causing the problem?
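
    A hedged check that fits the "loads for two minutes then gives up" pattern is an MTU/fragmentation issue on the wired path; standard Windows commands, with the gateway address below being a placeholder for whatever ipconfig /all reports:

        rem Largest unfragmented payload over a standard 1500-byte MTU path is 1472 bytes
        ping -f -l 1472 8.8.8.8

        rem Also confirm the wired adapter reaches the gateway with large packets
        ping -f -l 1472 192.168.1.254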

    Read the article

  • Ubuntu 12.04 - Pound Reverse Proxy and Adobe Flex/Flash Auth

    - by James
    First time posting. I have a completely fresh install of Ubuntu 12.04 acting as a reverse-proxy gateway to our internal network. Our setup is that we have one external IP but three domains we would like to point to various web servers on our internal network. It's not so much a load-balancing or caching issue; we are merely routing some client browsers to a port-80 web page (to adhere to some stricter corporate policies regarding placing port numbers after domain names). I have gone with Pound and everything seems to be working fine: static pages load and everything is good, with the exception of a Flash/Flex-based web client for a digital asset management (DAM) program. The actual static page loads fine; it is just at the moment of entering credentials, be they correct or incorrect, and hitting login that there is no response whatsoever, neither a rejection nor a confirmation. So the request back to the internal server can't be getting through. I have googled extensively and there might be a solution in a crossdomain.xml file? The documentation isn't very clear, and we are not the authors of the DAM app and have no control over the code on the Flash/Flex side. Questions: Is there a particular config file/solution for Pound that allows Flash/Flex auth information to be forwarded? Is there another reverse-proxy program (nginx?) that allows this type of config? Am I looking at this entirely the wrong way: should Flash/Flex fundamentally not be allowed to have this access?
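
    On the crossdomain.xml idea: it is just a small policy file served from the web root of the domain the Flash/Flex client talks to. A permissive sketch is below; whether the DAM client actually requests and honours it is an assumption, and the wildcard domain should be tightened for production:

        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
            <site-control permitted-cross-domain-policies="master-only"/>
            <allow-access-from domain="*" secure="false"/>
            <allow-http-request-headers-from domain="*" headers="*"/>
        </cross-domain-policy>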

    Read the article

  • Firefox and Chrome keep forcing HTTPS on Rails app using nginx/Passenger

    - by Steve
    I've got a really weird problem here where every time I try to browse my Rails app in non-SSL mode, Chrome (v16) and Firefox (v7) keep forcing my website to be served over HTTPS. My Rails application is deployed on an Ubuntu VPS using Capistrano, nginx, Passenger and a wildcard SSL certificate. I have set these parameters for port 80 in nginx.conf:

        passenger_set_cgi_param HTTP_X_FORWARDED_PROTO http;
        passenger_set_cgi_param HTTPS off;

    The long version of my nginx.conf can be found here: https://gist.github.com/2eab42666c609b015bff. The ssl-redirect.include file contains:

        rewrite ^/sign_up https://$host$request_uri? permanent ;
        rewrite ^/login https://$host$request_uri? permanent ;
        rewrite ^/settings/password https://$host$request_uri? permanent ;

    It is there to make sure those three pages use HTTPS when coming from a non-SSL request. My production.rb file contains this line:

        # Enable HTTP and HTTPS in parallel
        config.middleware.insert_before Rack::Lock, Rack::SSL, :exclude => proc { |env| env['HTTPS'] != 'on' }

    I have tried redirecting to HTTP via nginx rewrites, via Ruby on Rails redirects, and also by using the HTTP protocol in Rails view URLs. My application.rb file contains this method, used in a before_filter hook:

        def force_http
          if Rails.env.production?
            if request.ssl?
              redirect_to :protocol => 'http', :status => :moved_permanently
            end
          end
        end

    Every time I try to redirect to non-SSL HTTP, the browser attempts to redirect back to HTTPS, causing an infinite redirect loop. Safari, however, works just fine. Even when I've disabled serving SSL in nginx, the browsers still try to connect to the site using HTTPS. I should also mention that when I pushed my app to Heroku, the Rails redirect worked just fine for all browsers. The reason why I want to use non-SSL is that my homepage contains non-secure dynamic embedded objects and a non-secure CDN, and I want to prevent security warnings. I don't know what is causing the browsers to keep forcing HTTPS requests.
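
    One hedged explanation for a browser that keeps forcing HTTPS even after a 301 back to HTTP: Rack::SSL sends a Strict-Transport-Security (HSTS) header by default, and Chrome and Firefox remember it for the whole host, while Safari of that era did not, which would match the symptom. A quick check, with the hostname as a placeholder:

        # Does any HTTPS response carry an HSTS header?
        curl -sI https://www.example.com/ | grep -i strict-transport-security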

    Read the article

  • mod_proxy incorrect redirect behaviour

    - by Kevin Loney
    In Chrome this configuration causes an infinite redirect loop, and in every other browser I have tried, a request for https://www.example.com/servlet/foo results in a redirect to https://www.example.com/foo/ instead of https://www.example.com/servlet/foo/. However, this only occurs when I do not include a trailing / at the end of the request URL (i.e. http://www.flightboard.net/servlet/foo/ works just fine).

        <VirtualHost *:80>
            # ...
            RewriteEngine On
            RewriteCond %{HTTPS} off
            RewriteCond %{REQUEST_URI} ^/servlet(/.*)?$
            RewriteRule ^(.*)$ http://%{HTTP_HOST}$1 [R=301,L]
        </VirtualHost>

        <VirtualHost *:443>
            # ...
            ProxyPass /servlet/ ajp://localhost:8009/
            ProxyPassReverse /servlet/ ajp://localhost:8009/
        </VirtualHost>

    The virtual host on port 443 has no rewrite rules that could possibly be causing the problem, the Tomcat contexts being referenced do not send any redirects, and if I change the ProxyPass and ProxyPassReverse directives to:

        ProxyPass / ajp://localhost:8009/
        ProxyPassReverse / ajp://localhost:8009/

    everything works fine (except for the fact that everything from www.example.com is being passed to the proxy, which is not the behaviour I want). I'm fairly certain this is a problem with the way I have my proxy settings configured, because I logged all the rewrite output coming from Apache and it was all correct.

    Read the article

  • Lost partition after restarting

    - by nxhoaf
    I have Windows 7 Professional with the service pack installed on my Lenovo ThinkPad T420 laptop. After formatting the disk and installing Windows 7 (as detailed above), I went to Computer -- Manage -- Storage -- Disk Management to split my 300 GB C partition into two partitions: C (162 GB) and E (140 GB). It worked fine for about two days. Today, when I turned on my computer, I was very surprised to find that the E partition has disappeared. I can confirm that I didn't do anything careless yesterday, and before I shut down my computer everything was fine. In general, here is what I did over those two days (from the point that I formatted the disk and installed Windows):

    • Format the 300 GB hard disk
    • Install Windows 7
    • Install Eclipse, DB2, ... (I'm a developer)
    • Install some other tools (OpenOffice, Skype...)
    • Install PGP (http://www.symantec.com/encryption) <-- I'm forced to use that due to my company policy
    • Use Computer -- Manage -- Storage -- Disk Management to split my 300 GB C partition into 2 partitions as described above

    It worked quite well for the last two days, until today... Can you please help me recover my lost partition? Thank you! For more info, here is my partition info (you can also see the image here):

    Read the article
