Search Results

Search found 47251 results on 1891 pages for 'web storage'.

Page 870 of 1891

  • ssh port forwarding / security risk

    - by jcooper
    Hi there, I want to access a web application running on a web server behind my office firewall from an external machine. We have a bastion host running sshd that is accessible from the Internet. I want to know if this solution is a bad idea:
    1. Create an account on the bastion host with shell=/bin/false and no password ('testuser')
    2. Create an SSH RSA key on the external machine
    3. Add the public RSA key to testuser's authorized_keys file
    4. From the external host, open a tunnel to the bastion host: ssh -N -L 8888:targethost:80 testuser@bastionhost
    5. Run my tests from the external host
    6. Shut down the ssh tunnel
    I understand that if my RSA private key were compromised then someone could ssh to the bastion host. But are there other reasons this solution is a bad idea? Thank you!
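    Below is a minimal sketch of the workflow described above, assuming the public key has already been placed in testuser's authorized_keys and using 'bastionhost' and 'targethost' as placeholder names:
        # on the external machine: generate a dedicated key pair (an admin installs the .pub for testuser)
        ssh-keygen -t rsa -f ~/.ssh/bastion_testuser

        # open the tunnel: local port 8888 forwards to targethost:80 via the bastion
        ssh -i ~/.ssh/bastion_testuser -N -L 8888:targethost:80 testuser@bastionhost &

        # run the tests against the forwarded port
        curl -I http://localhost:8888/

        # tear the tunnel down when finished
        kill %1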

    Read the article

  • Do parent website application pools serve child application pools as well?

    - by Mike G
    I am running a .NET web application in its own application pool on IIS7. The parent website is set to run in its own application pool. Today we noticed a huge number of connections going to IIS. I tried to browse a plain ol' .html page in the directory of the web application and it hangs. I then tried to browse another plain .html file in the root of the parent website, and it too hangs. In Performance Monitor I see there are some 8k connections to the default website and climbing. I can't work out whether my application was the problem or IIS itself. If it was my application, wouldn't the .html page in the root of the parent website still be served? Edit: Also, if I shut down the app pool for my application, the .html page in the root of the parent website still cannot be displayed.
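    To see how the site and its child application are mapped to pools, and what those worker processes are actually doing, one option is IIS's appcmd tool; a sketch (the site name "Default Web Site" is an assumption):
        %windir%\system32\inetsrv\appcmd.exe list apppool
        %windir%\system32\inetsrv\appcmd.exe list app /site.name:"Default Web Site"

        rem list requests currently executing, to see where the 8k connections are queued
        %windir%\system32\inetsrv\appcmd.exe list request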

    Read the article

  • Implications of allowing Windows clients to use NTLMv1?

    - by Boden
    I have a web application that I'd like to authenticate to using pass-through NTLM for SSO. There is a problem, however, in that NTLMv2 apparently will not work in this scenario (without the application storing an identical password hash). I enabled NTLMv1 on one client machine (Vista) using its local group policy: Computer Configuration - Windows Settings - Security Settings - Network security: LAN Manager authentication level. I changed it to "Send LM & NTLM - use NTLMv2 session security if negotiated". This worked, and I'm able to log in to the web application using NTLM. Now this application would be used by all of my client machines, so I'm wondering what the security risks are if I push this policy out to all of them (not to the domain controller itself, though)?
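    For reference, that policy maps to the LmCompatibilityLevel registry value; level 1 corresponds to "Send LM & NTLM - use NTLMv2 session security if negotiated". A sketch of setting it directly, e.g. from a deployment script, assuming you really do want LM/NTLM responses enabled on the clients:
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f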

    Read the article

  • monitor http traffic from non-browser

    - by Ananth
    Hi, I want to monitor an HTTP request generated by an exe. Is there any tool that can help me? Actually, an exe calls my ASP.NET web page to register a user. The exe constructs the request with all the data in it, but when the request reaches my web page I don't see any data. I want to monitor the Request object and the traffic to find out what is really being sent. Any help is appreciated. Thanks. Ananth
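    If the page is hosted on a recent Windows server, one way to capture the traffic on the server side without installing anything is the built-in network tracing; a sketch (the trace file path is arbitrary, and a proxy-style tool such as Fiddler on the machine running the exe is an alternative):
        netsh trace start capture=yes tracefile=C:\temp\register-request.etl
        rem ... have the exe send its request ...
        netsh trace stop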

    Read the article

  • Groups issue on Ubuntu

    - by grobarTN
    Hello, I am a member of a couple of groups, let's say Master, Student and Web. The problem is that by default whatever I create is owned by the Student group. I need new files to be created with the Web group instead. The www/ folder where I need to write files is already mode 770, but because my files pick up my Student group I am not allowed to write to that folder. Is there any way to change the group that I create files under? If I execute groups it lists all the groups, so I am a member of the correct group; I just can't write to the folder. Anyone?
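    Two common fixes, sketched here assuming the directory is /srv/www and the group is called web (adjust the names to your setup):
        # option 1: switch the primary group for the current shell session before creating files
        newgrp web

        # option 2: set the setgid bit on the directory so new files inherit its group
        sudo chgrp -R web /srv/www
        sudo chmod g+s /srv/www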

    Read the article

  • Ubuntu VPS - email and webserver

    - by xZero
    I have a VPS based on Ubuntu with the whole LAMP stack installed, everything needed for a web server, and it works perfectly, but I'm still not able to configure it as a mail server. I have configured an MX record for my domain (mail.mydomain.com) and that part is OK. I also installed Postfix, Dovecot and Roundcube and configured them using this tutorial: http://ubuntuguide.org/wiki/Mail_Server_setup After hours of configuring it still doesn't work. I have experience with Linux and web hosting, and I successfully configured a mail server once in the past on Debian 6, with help from that guide. When I try to send email to myself, Gmail says: Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the server for the recipient domain mydomain.com by dc147738a1117e4c12273.mydomain.com. [MY_SERVER_IP]. The error that the other server returned was: 554 5.7.1 <[email protected]>: Relay access denied Also, I have the Webmin control panel, which works perfectly; how do I configure Postfix and Dovecot from there? Thanks in advance.
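    "Relay access denied" for your own domain usually means Postfix does not yet list mydomain.com as a destination it accepts mail for. A sketch of the check and one possible fix (mydomain.com is a placeholder; if you use virtual mailbox domains, add it there instead of mydestination, never to both):
        # see which domains Postfix currently accepts mail for
        postconf mydestination virtual_mailbox_domains

        # accept mail for the domain as a local destination
        postconf -e 'mydestination = $myhostname, mydomain.com, localhost.$mydomain, localhost'
        postfix reload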

    Read the article

  • Linux EC2 Instance Security Consideration

    - by Amzath
    I am going to host a web site on an Amazon EC2 instance, which will be a Linux instance. My web application will be developed using PHP, Apache and MySQL. As I am new to Linux and the Amazon EC2 environment, what key areas of security should I consider to protect my server? This may be a very generic question, as security itself is a vast area, but I need to kick-start with the most important points. That way I will be able to work through all those areas one by one.
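    As a starting point, a minimal hardening sketch for a stock Ubuntu LAMP instance (the package manager and firewall commands are assumptions; on EC2 the security group should be locked down to the same three ports):
        # keep the system patched
        sudo apt-get update && sudo apt-get upgrade -y

        # host firewall: allow only SSH, HTTP and HTTPS
        sudo ufw allow 22/tcp && sudo ufw allow 80/tcp && sudo ufw allow 443/tcp
        sudo ufw enable

        # SSH hardening in /etc/ssh/sshd_config, then restart sshd:
        #   PermitRootLogin no
        #   PasswordAuthentication no
        sudo service ssh restart

        # MySQL: remove anonymous users/test db and set a root password
        sudo mysql_secure_installation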

    Read the article

  • How to display programs, started by TSWA RemoteApp, inside a browser instead of directly on the desktop

    - by richardboon
    For those not familiar with Terminal Services Web Access and Resulting Internet Communication in Windows Server 2008, here's a brief overview: technet.microsoft.com/en-us/library/cc754502(WS.10).aspx The problem I am trying to solve can be seen in the picture at step 16, where the application is displayed directly on the desktop [see link below]: http://blogs.technet.com/askcore/archive/2008/07/22/publishing-the-hyper-v-management-interface-using-terminal-services.aspx I am in the process of setting up Terminal Services Web Access RemoteApp for our company. Users only want RemoteApp and need to see the remote program running within/contained inside the browser. They don't want to see or access the whole desktop [as is the case with Remote Desktop, which can be displayed inside a browser].

    Read the article

  • Apache server on Raspberry Pi not visible from outside (public IP)

    - by Kronos
    I have done a fresh install of Arch Linux ARM on a Raspberry Pi and set up a LAMP stack there, all fresh. I also have Arch (x86) on my laptop with Apache installed, and as far as I know two web servers cannot run in the same network segment, so the problem is as follows. On my laptop, with Apache running, if I browse to my network's public IP everything is OK and I can see my website, but (obviously after turning that server off) if I browse to the public IP with the Apache on the Raspberry Pi running (yes, only that Apache running) I cannot see my website. If I access it via the local network it works fine; I can see my website. So I can reach my Raspberry Pi website only locally, while my other web server can be reached both locally and publicly. I have the same conf files on both of them, so what is the difference? I was planning on making the RPi a development server. Thanks in advance
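    A quick way to narrow this down, sketched below; a common cause is that the router's port-80 forwarding still points at the laptop's LAN address rather than the Pi's (all addresses here are placeholders):
        # on the Raspberry Pi: confirm Apache is listening on all interfaces
        ss -tlnp | grep ':80'

        # from another machine on the LAN: confirm the Pi answers locally
        curl -I http://192.168.1.50/

        # from outside the network (e.g. a phone off Wi-Fi): test the public IP
        curl -I http://YOUR_PUBLIC_IP/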

    Read the article

  • xinetd vs iptables for port forwarding performance

    - by jamie.mccrindle
    I have a requirement to run a Java based web server on port 80. The options are:
    1. Web proxy (Apache, nginx etc.)
    2. xinetd
    3. iptables
    4. setuid
    The baseline would be running the app using setuid, but I'd prefer not to for security reasons. Apache is too slow, and nginx doesn't support keep-alives so new connections are made for every proxied request. xinetd is easy to set up but creates a new process for every request, which I've seen cause problems in a high-performance environment. The last option is port forwarding with iptables, but I have no experience of how fast it is. Of course, the ideal solution would be to do this on a dedicated hardware firewall / load balancer, but that's not an option at present.
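    For the iptables option, the forwarding itself is a single NAT rule; a sketch, assuming the Java server listens on port 8080 on the same host:
        # redirect inbound traffic on port 80 to the Java server's port 8080
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

        # optional: also redirect locally generated traffic (e.g. curl http://localhost/)
        iptables -t nat -A OUTPUT -p tcp -o lo --dport 80 -j REDIRECT --to-ports 8080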

    Read the article

  • Has the hardware in my modem gone bad?

    - by Tyler Scott
    I contacted CenturyLink about my modem recently and received useless and unrelated information. The problem seems to be that the modem will no longer save settings, the web interface is unusable except in Internet Explorer for some reason, and the modem keeps resetting. CenturyLink claimed it had to do with signal strength, but I checked and it is currently between good and outstanding according to this. All of the lights remain green even when it starts acting up, and I lose internet shortly before it crashes and reboots. Does anyone have any idea what is going on or what I can do to fix it? (Asking CenturyLink again is obviously not going to help.) Update 1: Accessing the syslog from the web interface causes a crash. After it reboots, the log looks as follows:
        01/01/1970 12:01:29 AM Ethernet Ethernet client connected ,ip(192.168.0.2), mac(1c:6f:65:4c:6d:3b)
        01/01/1970 12:01:38 AM Wireless 802.11 client connected ,ip(192.168.0.18), mac(d0:df:c7:c2:73:ca)
        01/01/1970 12:01:41 AM System Event Line 0: VDSL2 link up, Bearer 0, us=20128, ds=40127
        01/01/1970 12:01:43 AM dhcp6s[2028] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        01/01/1970 12:01:50 AM dhcp6s[2469] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        01/01/1970 12:01:52 AM radvd[2306] poll error: Interrupted system call
        01/01/1970 12:01:56 AM PPP Link PPP server detected.
        01/01/1970 12:01:56 AM PPP Link PPP session established.
        01/01/1970 12:01:56 AM PPP Link PPP LCP UP.
        01/01/1970 12:01:56 AM System Event Received valid IP address from server. Connection UP.
        06/05/2014 08:16:01 AM radvd[2511] poll error: Interrupted system call
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently! (this line repeated 9 times)
        06/05/2014 08:16:04 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:04 AM dhcp6s[3236] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        06/05/2014 08:16:08 AM Wireless 802.11 client connected ,ip(192.168.0.7), mac(44:6d:57:c4:d7:08)
    I can also get it to crash on various other pages. I am guessing the web server is unstable.

    Read the article

  • SonicWall and Windows CA

    - by Nathan C
    I'm attempting to import a certificate created by a CA I've set up in Windows using AD CS. I've done the following:
    1. Created my own CA (MyCompany)
    2. Enabled web services (mostly for ease of configuration)
    3. Generated a certificate request on the SonicWall itself
    4. Used the web services to sign the certificate
    5. Imported the signed certificate into the SonicWall; this caused the certificate to show "No" for the Verified field.
    6. Imported the CA's certificate.
    This is where I get stuck. I attempted to import the CRL, but get the following error: CRL Error - Verification failed using CA certificate. No further errors appear in the logs. Without the CRL the certificate won't verify, and it doesn't appear under the "Administration" page so I can select it for use via HTTPS. Any ideas?
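    To rule out a stale or mismatched CRL, one way to publish and export a fresh CRL from the Windows CA for import on the SonicWall, sketched (run on the CA; the output path is arbitrary):
        rem publish a new CRL on the CA
        certutil -CRL

        rem export the current CRL to a file that can be uploaded to the SonicWall
        certutil -getcrl C:\temp\mycompany.crl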

    Read the article

  • Save static version of a webpage to be available offline

    - by Tal Weiss
    Is there a way to save a webpage as static HTML, available for offline viewing, editing etc.? I want to remove all JavaScript files and leave only HTML, CSS and images. For example, if this web page has a Facebook Like button, I want the image of the button to be part of the HTML as a regular image (and not be loaded by some JavaScript code that runs after I load the page). I'm trying to prepare a webpage for an offline demo. When I use the standard "save as HTML complete" style tools, all the JavaScript is saved as well, and when viewed offline all the dynamic content is blank. Note: I don't expect the dynamic content to work, of course, with no JavaScript. I just want the web page to LOOK as though it was just loaded from the interwebs. Thanks, Tal.
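    A sketch of one way to do this with wget, pulling the page plus its CSS and images while rejecting scripts (the URL is a placeholder; anything the page only draws after JavaScript runs will still be missing, as noted above):
        wget --page-requisites --convert-links --adjust-extension --span-hosts \
             --reject '*.js' --directory-prefix=offline-demo \
             'http://example.com/page.html'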

    Read the article

  • Linux FHS: /srv vs /var ... where do I put stuff?

    - by wag2639
    My web development experience started with Fedora and RHEL, but I'm transitioning to Ubuntu. In Fedora/RHEL the default seems to be using the /var folder, while Ubuntu uses /srv. Is there any reason to use one over the other, and where does the line fall? (It confused me so much that until very recently I thought /srv was /svr, for server/service.) My main concern deals with two types of folders:
    1. default www and ftp directories
    2. specific application folders, like samba shares (possibly grouped under an smb folder) and web applications (should these go in the www folder, or can I do a symlink to their own directory, like "_/www/wordpress" - "/srv/wordpress"?)
    I'm looking for best practice, industry standards, and qualitative reasons for which approach is best (or at least why it's favored).
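    For illustration only, one layout that keeps a conventional web root while each application lives in its own tree under /srv; the names are assumptions, not a standard:
        # application content under /srv, with the web root pointing at it
        sudo mkdir -p /srv/wordpress
        sudo ln -s /srv/wordpress /var/www/wordpress

        # other services grouped under their own subtrees
        sudo mkdir -p /srv/ftp /srv/smb/projects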

    Read the article

  • Advice about performance for local or remote SQL Server?

    - by TruMan1
    I currently have my web server and SQL Express / MySQL server on the same server, which is a VPS. I have been having problems with my hosting, so I am thinking of separating the web and DB servers onto two VPSes. Does anyone recommend this? I am worried that changing my setup from a local DB server to a remote one will degrade performance heavily. They will not be on the same network, but will reference each other via an IP address. Anything I should be aware of?

    Read the article

  • What Defines an AD Object as "Inactive"

    - by Malnizzle
    I am going to be using some DSQUERY/DSMOVE scripts to clean up my AD domain. One option is to move inactive objects to an OU that has restrictive GPOs applied to it. Something like: DSQUERY computer -inactive 10 | DSMOVE -newparent <distinguished name of target OU> My question is: what defines an object, both user and computer, as "inactive" for a period of time? Is it the last time the computer was logged on to for computer accounts, and for users the last time the user account logged on to a computer? But what if, say for example, I had a web server that wasn't rebooted or logged into for a couple of months but remained powered on and functioning as normal; would it be defined as "inactive" even though technically it's still serving web pages and so on? Thanks for the help!
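    For what it's worth, dsquery's -inactive argument is counted in weeks, and as far as I know it is evaluated from the account's replicated lastLogonTimestamp attribute (which can lag real logons by up to a couple of weeks). A sketch of the move described above, with a placeholder OU:
        rem computers with no recorded logon for roughly 10 weeks, moved into a quarantine OU
        dsquery computer -inactive 10 -limit 0 | dsmove -newparent "OU=Inactive,DC=example,DC=com"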

    Read the article

  • IIS 401.3 - Unauthorized on only 1 server out of 3 set up for network load balancing

    - by Tony
    Over the weekend our server admin set up two virtual Windows 2008 machines with IIS installed and set them up under NLB. I came in, changed the application pool the website was running under to our domain account that has proper access to the database and the file share hosting our .NET web application (Sitefinity), and changed it to .NET 4 Integrated. NLB and everything was running fine on both servers. He brought up the third server for our cluster on Tuesday and I performed the same actions. The only difference was that I was given admin rights on the third server so I could set it up remotely instead of going to his office. He has full control over the share and the NTFS permissions on \\hostname\Sitefinity, and I believe I only had read access. I pointed the web site to the same \\hostname\Sitefinity\sitename share that the others were on, and the authentication/authorization test settings passed. I hit the site from http://localhost (like I did successfully from the other two before trying the cluster's IP address) and I received HTTP Error 401.3 - Unauthorized. I've verified many times that the application pool is running under the same service account. I tried hitting just a simple test.htm; it works fine on both of the first two servers but I get the same 401.3 on the third. I copied my dev project to the local inetpub directory, re-pointed the website, and that ran perfectly. I turned on Failed Request Tracing and it acts as if it's still running as the local IUSR account (instead of my domain account)? Here is an excerpt of the File Cache Access Start and the error from the trace:
        FileName \\hostname\sitefinity\sitename\test.htm
        UserName IUSR
        DomainName NT AUTHORITY
        ----------
        Successful false
        FileFromCache false
        FileAddedToCache false
        FileDirmoned true
        LastModCheckErrorIgnored true
        ErrorCode 2147942405
        LastModifiedTime
        ErrorCode Access is denied. (0x80070005)
        ----------
        ModuleName IIS Web Core
        Notification 2
        HttpStatus 401
        HttpReason Unauthorized
        HttpSubStatus 3
        ErrorCode 2147942405
        ConfigExceptionInfo
        Notification AUTHENTICATE_REQUEST
        ErrorCode Access is denied. (0x80070005)
        ----------
    My personal AD account was then granted read/write permissions on the share, so I created a new application pool and set the site under it in case there was an issue with the application pool, but no success. I created another under my own account and it still failed. It just seems like maybe it's not trying to access the files under the account my application pools are running under, although that's the only way I've done things before. I set the Physical Path Credentials in Advanced Settings on the site to the service account and it threw a 500 error of some sort, so I assume that's not the answer (and I don't have to do it on the other servers). It's like somehow I'm forcing impersonation of the IUSR account or something?
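    Since the trace shows the file being opened as IUSR, two things worth checking from that third server are whether anonymous authentication is still set to use the application pool identity, and whether the pool account actually has NTFS read rights on the content path. A sketch (site, account and path names are placeholders):
        rem which identity does anonymous authentication use for this site?
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:system.webServer/security/authentication/anonymousAuthentication

        rem show / grant NTFS permissions on the shared content path for the pool account
        icacls \\hostname\Sitefinity\sitename
        icacls \\hostname\Sitefinity\sitename /grant "DOMAIN\svc-sitefinity:(OI)(CI)RX" /T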

    Read the article

  • How do i find out what's preventing delete requests from working in iis7.5 and iis8?

    - by Simon
    Our site has an MVC REST API. Recently both the live servers and my development machine stopped accepting DELETE requests, instead returning a 501 Not Implemented response. On my development machine, which is Windows 7 running IIS 7.5, the solution was to add these lines to our Web.config, under system.webServer / handlers:
        <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
        ...
        <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE" type="System.Web.Handlers.TransferRequestHandler" resourceType="Unspecified" requireAccess="Script" preCondition="integratedMode,runtimeVersionv4.0" />
    However, this didn't work on any of our live servers; not on Server 2008 + IIS 7.5 and not on Server 2012 + IIS 8. There are no verbs set up in Request Filtering, and WebDAV is not installed on any of our live servers. The error page gives no further information, and nothing gets recorded in the logs. How do I find out what's preventing DELETE requests from working in IIS 7.5 and IIS 8?
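    One way to compare what the live servers actually end up with, after inheritance from applicationHost.config, is to dump the effective handler and request-filtering configuration for the site; a sketch using appcmd ("Default Web Site" is an assumption):
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:system.webServer/handlers /text:*
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:system.webServer/security/requestFiltering /text:*

        rem also confirm WebDAVModule really is absent from the effective module list
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:system.webServer/modules /text:*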

    Read the article

  • mod_deflate Supported Encodings for Compression

    - by sparc
    It seems to me that mod_deflate in Apache 2.2 will always return: Content-Encoding: gzip and never: Content-Encoding: deflate It was explained to me that although there is a deflate algorithm, mod_deflate is named after a file format, in which the algorithm could be any of gzip, bzip or pkzip. Of those three, mod_deflate provides gzip. It seems as though gzip is the most popular and widely supported algorithm in web browsers, but I know some web servers and proxies do return Content-Encoding: deflate. Aside from the confusion over the module's name, is it true that mod_deflate will only return Content-Encoding: gzip? Thank you.
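    An easy way to check what a particular server negotiates is to offer both encodings and look at the response header; a sketch with curl (example.com is a placeholder):
        curl -sI -H 'Accept-Encoding: gzip,deflate' http://example.com/ | grep -i '^content-encoding'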

    Read the article

  • How do the routers communicate with each other ?

    - by Berkay
    Let's say that I want to make a request to a web page which is hosted in Europe (I live in the USA). My packets only contain the IP address of the web page; first the domain name to IP address translation is done, then my packets start their journey to Europe. I assume that MAC addresses are never used in this situation? Or are they? First, my packets pass through many routers on the way; how do these routers communicate with each other? Are router addresses added to my packet headers? Second, is there a specific path for router-to-router communication, and which conditions affect this route? Third, to cross the Atlantic Ocean, are cables used or... ?
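    To make the hop-by-hop path visible, one can trace the route from the shell; a sketch (the hostname is a placeholder):
        # each line is one router along the path; the destination IP in the packet never changes,
        # while the destination MAC address is rewritten to the next hop's on every link
        traceroute www.example.eu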

    Read the article

  • httpd.conf for case-insensitive file serving

    - by Anton Gogolev
    I'm a complete newbie with regard to managing Apache, so excuse me if I'm phrasing something incorrectly. I have a web site, say http://domain.com. The problem is that when I try to open http://domain.com/index.html in a web browser it displays the page, but when I attempt to access http://domain.com/Index.html (note the capital I), it responds with HTTP 404. How do I configure Apache to serve both these files (and directories, for that matter) in a case-insensitive manner? The current httpd.conf is here. EDIT Dan C, thanks for the hint. I basically want to allow users to download files from my server and don't really want them to be aware that Index.html and index.html are in fact different. I'm also very willing to learn what the ramifications of this decision are.
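    One option, sketched below, is Apache's mod_speling, which answers case-mismatched requests with a redirect to the real file name (CheckCaseOnly restricts it to case fixes on newer Apache releases; plain CheckSpelling also corrects single-character typos):
        # httpd.conf; assumes mod_speling is available as a loadable module
        LoadModule speling_module modules/mod_speling.so

        <Directory "/var/www/html">
            CheckSpelling On
            CheckCaseOnly On
        </Directory>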

    Read the article

  • How to configure custom error page in Plesk 9.3 for non existing folder?

    - by Junior Mayhé
    I'm trying to configure Plesk to show website visitors a custom error HTML page. The currently hosted site is an ASP.NET site. This site shows its custom errors through error403.aspx and error404.aspx files. Now, to comply with Plesk, I've created error_docs with the required files like forbidden.html, etc. When a user tries to navigate to http://mysite.com/a_missing_page.aspx, the visitor is redirected to error404.aspx correctly. But when a user tries to navigate to a nonexistent directory, http://mysite.com/a_missing_folder/, the site takes me to the regular IIS 404 page. Plesk has "Custom error documents" activated in the web hosting settings. The ASP.NET error pages defined in web.config are showing fine, but it seems Plesk won't show its custom HTML error documents. The bottom line here is about setting up a custom error page for a directory. Is it possible to do this using Plesk, or do I have to change it manually in IIS?
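    If it has to be done at the IIS level rather than through Plesk, a sketch of the kind of web.config override that makes IIS itself serve the custom page for requests ASP.NET never sees (such as missing folders); whether Plesk delegates the httpErrors section to the site is an assumption to verify:
        <system.webServer>
          <httpErrors errorMode="Custom" existingResponse="Replace">
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" path="/error404.aspx" responseMode="ExecuteURL" />
          </httpErrors>
        </system.webServer>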

    Read the article

  • Is it possible to set the name of the current virtual desktop via commandline?

    - by Dave Vogt
    The wmctrl utility can list the names of all virtual desktops:
        % wmctrl -d
        0  - DG: 3360x1200  VP: 0,0  WA: 0,0 3360x1199  Mail / Comm
        1  * DG: 3360x1200  VP: 0,0  WA: 0,0 3360x1199  Web / Docs
        2  - DG: 3360x1200  VP: 0,0  WA: 0,0 3360x1199  A
        3  - DG: 3360x1200  VP: 0,0  WA: 0,0 3360x1199  B
    I would like to be able to change, from the command line, the name of the current desktop to something else. This is possible using some pagers, for example, but I couldn't find out how to do it from the command line. Update: the xprop utility seems to be able to set the desktop names, but I could not figure out the exact format to do so yet:
        % xprop -root -f _NET_DESKTOP_NAMES 8s -set _NET_DESKTOP_NAMES asdf
        % xprop -root _NET_DESKTOP_NAMES
        _NET_DESKTOP_NAMES(UTF8_STRING) = "asdf", "Web / Docs", "A"

    Read the article
