Search Results

Search found 29101 results on 1165 pages for 'open basedir'.


  • Home network with two isolated separate subnets, running on cablemodem/router and WRT-router.

    - by Johan Allgoth
    I have a new connection with a nice new router/cable modem. I'd like to set it up optimally and need some pointers; I am a complete n00b when it comes to routing. I want to end up with two separate subnets, 10.1.2.0/24 and 192.168.1.0/24, each available on its own wireless channel/SSID, and both firewalled. I want my wired computers on the gigabit switch, optimally with public IPs. I want to be able to reach 192.168.1.0/24 from 10.1.2.0/24, but not vice versa. Everyone should have internet access.

    Hardware and capabilities:
    - Netgear CG3100: handles the cable connection. Gigabit switch, 802.11n. Can do DHCP, firewall, NAT etc. and can choose its subnet. Can turn off NAT and, if so, hand out up to 4 public IPs. Somewhat challenged when it comes to configuration.
    - WRT router: runs DD-WRT/OpenWrt very stably. 100 Mbit switch, 802.11g. Can do DHCP, firewall, NAT etc. and can choose its subnet. Highly configurable.

    I hope to keep 10.1.2.0/24 on the CG3100 for speed reasons, and 192.168.1.0/24 on the WRT router for quota and user control reasons. On my 10.1.2.0/24 network I plan on running servers for various services. Should I turn off NAT on the WRT router, or on the cable modem? What would I activate in that case? Is double NAT always f-ed up?
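    One-way reachability between two subnets like this is typically enforced with iptables on the WRT router. Below is a minimal sketch, assuming NAT is turned off on the WRT router (it routes between the two subnets), its WAN port faces the CG3100 on 10.1.2.0/24, and the interface names are the common DD-WRT defaults; the names and addresses are assumptions to adjust for the actual setup:

        # Hypothetical interface names on the DD-WRT/OpenWrt router's firewall script
        WAN_IF=vlan2      # faces the CG3100 / 10.1.2.0/24
        LAN_IF=br0        # faces 192.168.1.0/24

        # Allow connections initiated from 10.1.2.0/24 into 192.168.1.0/24
        iptables -A FORWARD -i "$WAN_IF" -o "$LAN_IF" -s 10.1.2.0/24 -d 192.168.1.0/24 -j ACCEPT

        # Allow replies back out, block new connections from 192.168.1.0/24 into 10.1.2.0/24,
        # but still let 192.168.1.0/24 reach the internet
        iptables -A FORWARD -i "$LAN_IF" -o "$WAN_IF" -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i "$LAN_IF" -o "$WAN_IF" -d 10.1.2.0/24 -j DROP
        iptables -A FORWARD -i "$LAN_IF" -o "$WAN_IF" -j ACCEPT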

    Read the article

  • Software way to cool down an old MacBook Pro

    - by notMacBookProSuperUser
    Hi all. First, a little background: I've got lots of computers, including Linux PCs, two MacBook Pros and a Mac mini. My concern is with my 'old' MacBook Pro (Core Duo). It really does overheat, and the warranty is long void. Years ago (I'd say 2.5 years ago or so) it overheated so badly that the battery inflated from the heat. I got a new battery for free, but the machine still gets incredibly hot, much hotter than any other computer I've got: my newer Core 2 Duo MacBook Pro doesn't get nearly as hot as the old one. It's really a pain because I use the old MBP on my lap in front of the TV, and it can become unbearable. I don't want to open that old MBP up. On Linux I can force a CPU 'governor' that decides how the CPU is allowed to operate: 'on demand', 'always max speed', 'always speed x', etc. Does the same exist under Mac OS X? Is there a way, say if a 1.86 GHz Core Duo can also run at 1.6 GHz, to tell Mac OS X: "never run this CPU above 1.6 GHz"?
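    For reference, this is roughly what the Linux mechanism referred to above looks like. It is a sketch of the cpufreq/sysfs interface on a typical Linux laptop, not a Mac OS X feature; the frequency value and CPU number are illustrative:

        # List available governors and frequencies for CPU 0
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies

        # Cap the maximum clock at 1.6 GHz (value in kHz) and use the on-demand governor
        echo 1600000 | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
        echo ondemand | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor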

    Read the article

  • Correctly setting up UFW on Ubuntu Server 10.04 LTS which has Nginx, FastCGI and MySQL?

    - by littlejim84
    Hello. I want the firewall on my new web server to be as secure as it needs to be. After doing research on iptables, I came across UFW (Uncomplicated Firewall). This looks like a better way for me to set up a firewall on Ubuntu Server 10.04 LTS, and since it's part of the install it seems to make sense. My server will have Nginx, FastCGI and MySQL on it. I also want to allow SSH access (obviously). So I'm curious to know exactly how I should set up UFW, and whether there is anything else I need to take into consideration. After doing research, I found an article that explains it this way:

        # turn on ufw
        ufw enable
        # log all activity (you'll be glad you have this later)
        ufw logging on
        # allow port 80 for tcp (web stuff)
        ufw allow 80/tcp
        # allow our ssh port
        ufw allow 5555
        # deny everything else
        ufw default deny
        # open the ssh config file and edit the port number from 22 to 5555, ctrl-x to exit
        nano /etc/ssh/sshd_config
        # restart ssh (don't forget to ssh with port 5555, not 22 from now on)
        /etc/init.d/ssh reload

    This all seems to make sense to me. But is it all correct? I want to back this up with any other opinions or advice to ensure I do this right on my server. Many thanks!
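    One thing worth noting about the quoted sequence: it runs ufw enable before the SSH rule is added, and it never mentions MySQL. A slightly reordered sketch, under the assumptions that SSH has already been moved to port 5555 and MySQL only needs to be reachable from localhost (so no rule for 3306 at all), might look like this:

        # set the default policy first, then add allowances, then enable
        ufw default deny incoming
        ufw allow 80/tcp          # Nginx
        ufw limit 5555/tcp        # SSH on the custom port, rate-limited against brute force
        ufw logging on
        ufw enable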

    Read the article

  • SSH multi-hop connections with netcat mode proxy

    - by aef
    Since OpenSSH 5.4 there is a new feature called netcat mode, which binds STDIN and STDOUT of the local SSH client to a TCP port reachable through the remote SSH server. The mode is enabled simply by calling:

        ssh -W [HOST]:[PORT]

    Theoretically this should be ideal for the ProxyCommand setting in per-host SSH configurations, where the nc (netcat) command was previously often used. ProxyCommand lets you configure a machine as a proxy between your local machine and the target SSH server, for example when the target SSH server is hidden behind a firewall. The problem is that, instead of working, it throws a cryptic error message in my face:

        Bad packet length 1397966893.
        Disconnecting: Packet corrupt

    Here is an excerpt from my ~/.ssh/config:

        Host *
          Protocol 2
          ControlMaster auto
          ControlPath ~/.ssh/cm_socket/%r@%h:%p
          ControlPersist 4h

        Host proxy-host proxy-host.my-domain.tld
          HostName proxy-host.my-domain.tld
          ForwardAgent yes

        Host target-server target-server.my-domain.tld
          HostName target-server.my-domain.tld
          ProxyCommand ssh -W %h:%p proxy-host
          ForwardAgent yes

    As you can see, I'm using the ControlMaster feature so I don't have to open more than one SSH connection per host. The client machine I tested this with is Ubuntu 11.10 (x86_64); both proxy-host and target-server are Debian Wheezy Beta 3 (x86_64) machines. The error happens when I call ssh target-server. When I call it with the -v flag, here is what I get additionally:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /home/aef/.ssh/config
        debug1: Applying options for *
        debug1: Applying options for target-server.my-domain.tld
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: auto-mux: Trying existing master
        debug1: Control socket "/home/aef/.ssh/cm_socket/[email protected]:22" does not exist
        debug1: Executing proxy command: exec ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld
        debug1: identity file /home/aef/.ssh/id_rsa type -1
        debug1: identity file /home/aef/.ssh/id_rsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_dsa type -1
        debug1: identity file /home/aef/.ssh/id_dsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa-cert type -1
        debug1: permanently_drop_suid: 1000
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-3
        debug1: match: OpenSSH_6.0p1 Debian-3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        Bad packet length 1397966893.
        Disconnecting: Packet corrupt
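    "Bad packet length" during key exchange usually means something other than the SSH protocol stream reached the client. A hedged way to narrow it down is to run the ProxyCommand by hand and look at the first bytes it returns (to catch a stray banner or shell-profile output), and to retry with multiplexing disabled to see whether ControlMaster is involved; host names below are the ones from the config above:

        # A clean tunnel should start with the server's version string, e.g. "SSH-2.0-OpenSSH_6.0p1"
        ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld < /dev/null | head -c 64 | xxd

        # Same connection with connection multiplexing turned off for this attempt
        ssh -o ControlMaster=no -o ControlPath=none -v target-server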

    Read the article

  • How to test server throughput

    - by embwbam
    I've always used ApacheBench (ab) to get a rough idea of how many requests per second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run ab against a simple hello-world server it handles 2500 requests per second or so. However, if I put a timeout in the hello-world function so that it responds after 2 seconds, ab reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab. If I increase the concurrency, the throughput goes up. This makes sense, because ab is basically sending out requests in batches of 100, which come back every 2 seconds: 100 requests / 2 seconds = 50 requests/second. If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit node.js's limit; I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Is there any way I can get a good estimate of how many requests my server can handle? I want to make sure the test machine isn't the one causing the problem.
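    The per-process file descriptor limit is easy to rule in or out before blaming the server, on both the machine running ab and the one running node. A minimal sketch of that check; the concurrency, request count and URL are placeholders:

        # Current limit on open file descriptors for this shell; 1024 is a common default
        ulimit -n

        # Raise it for this session (the hard limit may need root or a limits.conf change),
        # then rerun the benchmark
        ulimit -n 8192
        ab -n 20000 -c 500 http://localhost:8000/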

    Read the article

  • Building a Web proxy to get around same-origin restrictions for collaborative Webapp based on a MEAN stack

    - by Lew Cohen
    Can anyone point to books, articles, blogs, or even applications (open source or proprietary) that detail building a Web proxy? This specific proxy will exist to get around the same-origin restrictions that prevent, for instance, loading a given Website into an <iframe> in a Webapp. This Webapp is a collaborative application in which a group of users log in to the app's Website and can then load different Websites into the app's <iframe> and do various collaborative things (e.g., several users simultaneously browsing a Website, in sync). The Webapp itself is built on a MEAN stack (MongoDB, Express, AngularJS, and Node.js). The purpose of this proxy is not anonymous browsing or bypassing censorship. Information on how to build such a proxy does not seem to be readily available from my research. I've come across Glype but am not sure whether it is a feasible solution. I don't want to reinvent the wheel, so if a product is available for purchase, great; otherwise, we'd need to build one. The one that seems closest is http://www.corsproxy.com. In effect, we'd like to re-create this, since it evidently does what's needed. I don't care what server-side technology is used; our app is MEAN-based, if that has any bearing. Also, the proxy obviously has to honor basic security considerations (user cookies, etc.) and eventually be scalable. So, does anyone know of any sources that detail how to build one of these? Is it even worth building if something already exists? If so, what would be a good candidate? Are there any other issues that should be considered with this proxy/application? Thanks a lot!

    Read the article

  • How to set up port forwarding on a dedicated server running CentOS 5.4 with an Ubuntu 9.04 VM

    - by mairtinh
    The basic situation is a dedicated server running CentOS 5.4. At the moment I have one VM running Ubuntu 9.04. Later on I will want to add another VM running Windows Server 2003, but for now I am focusing on getting Ubuntu up and running. The Ubuntu installation works fine, but I'm seriously struggling to get port forwarding working so that I can reach the websites to be hosted on the Ubuntu VM. As a newbie to Linux, I am confused about the relationship between iptables and VMware's own port forwarding.

    Here's what I've tried so far. The IP of my server is xxx.xxx.xxx.xxx, and provider support have told me that the subnet mask is 255.255.255.0, the gateway address is xxx.xxx.xxx.1 and the network address is xxx.xxx.xxx.0. (Those latter two surprise me a bit; I expected private gateway/network addresses rather than public ones.)

    First I tried bridged networking, but had no success at all communicating with the machine other than through the VMware console. I tried pinging it from the host (over SSH into the host) but no joy; there was also no Internet access from the VM. I changed the interfaces configuration from DHCP to static, using an address of 192.168.1.100 and setting the gateway to xxx.xxx.xxx.1 as advised by the provider. No real difference: I still cannot ping the guest from the host or vice versa, and there is no Internet access from the guest.

    Then I tried NAT. The host automatically set the IP address to 192.168.132.128 with a gateway of 192.168.132.2. Now the guest has outbound Internet access, and when I VNC to the host and open Firefox at 192.168.132.128 I can see the hosted website okay, but I still cannot get to it from outside. When I say I'm confused about iptables and VMware port forwarding, what I mean is that I'm not sure whether the iptables forwarding rule should point to the IP address of the guest interface (192.168.132.128 in this case) or the gateway address 192.168.132.2. I have a feeling I'm missing something very simple here; can anybody tell me what it is?
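    With VMware's NAT networking, the usual place to forward a public port to the guest is the NAT daemon's own configuration rather than iptables. A minimal sketch, assuming a VMware Server/Workstation-style install on the CentOS host where the NAT network is vmnet8 and the guest sits at 192.168.132.128; paths and service names can differ between VMware versions:

        # /etc/vmware/vmnet8/nat/nat.conf -- add a forward under the [incomingtcp] section:
        #
        #   [incomingtcp]
        #   80 = 192.168.132.128:80
        #
        # then restart the VMware networking services so the NAT daemon rereads it:
        /etc/init.d/vmware restart

        # Also make sure the CentOS host firewall lets port 80 in:
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT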

    Read the article

  • knife on Windows inconsistently reads ~\.chef\knife.rb on Management Workstation

    - by gWaldo
    I am implementing a new instance of Chef (open source, v10.12) in an existing environment. Currently the environment is mostly Windows, but more Linux is being introduced. I have used Chef at a previous gig, but that was a *nix-only environment. Because this is a primarily Windows environment, my main workstation is Windows 7 (x64), and I use PowerShell as my main terminal. I created a ~\.chef directory, populated with a knife.rb and my client.pem file. When I run knife client list from ~, I get the expected results. I keep my work in Dropbox just in case my laptop should fail or be stolen. When I run knife client list from the repo directory (C:\Users\waldo\Dropbox\_company\projects\chef), I get:

        ERROR: Your private key could not be loaded from C:/home/waldo/.chef/waldog.pem
        Check your configuration file and ensure that your private key is readable

    (Note that the path is incorrect.) This is the progression as I walk up the tree towards ~, running knife client list in each directory:

        C:\Users\waldo\Dropbox\_company\projects\   => Above error
        C:\Users\waldo\Dropbox\_company\            => Above error
        C:\Users\waldo\Dropbox\                     => It works! (Expected results)
        C:\Users\waldo\                             => Expected results
        C:\Users\waldo\Documents\                   => Expected results
        C:\Users\waldo\Documents\GitHub             => Expected results
        C:\Users\waldo\Documents\GitHub\aProject\   => Expected results

    What. The. Eff! Now, I know that I can add -c path\to\knife.rb, but that's a HUGE PITA. The question is: why is knife inconsistently reading my ~\.chef\knife.rb, and how can I get around that without incurring carpal tunnel?

    Read the article

  • What is the easiest no-fail way to publish an ASP.NET app?

    - by Maestro1024
    What is the easiest no-fail way to publish an ASP.NET app? Sorry, a bit of an open-ended question, but I am having issues deploying an ASP.NET report project and any solution that gets the site up is fine. I am running Windows 7 / SQL Server 2008 and want to publish an ASP.NET report site that I created in Visual Studio 2008. The website launches when I run it in debug in Visual Studio, but I want to publish the site so that it can be seen on the LAN. I published the files to a folder, started up IIS Manager, added a new site and pointed it at that folder, and set the permissions on the folder to share with everyone. However, when I go to the DNS name I set for the website it does not launch. Any ideas on this? I see websites out there talking about a Web Sharing tab in the folder properties, but I do not see that when I look at folders. Why might that be? Another avenue I have not pursued yet is publishing directly to a website. Has anyone tried that? Is it better or worse than publishing to the file system?

    Read the article

  • Need help with local network printing while using VPN on Ubuntu 10.10 desktop

    - by MountainX
    I can print to my HP printer over the LAN when I'm not connected to the VPN. When connected to the VPN, printing fails. The VPN client is:

        OpenVPN 2.1.0 x86_64-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Jul 12 2010

    I can ping the printer while connected to the VPN:

        $ ping 192.168.100.12
        PING 192.168.100.12 (192.168.100.12) 56(84) bytes of data.
        64 bytes from 192.168.100.12: icmp_req=1 ttl=255 time=9.17 ms
        --- 192.168.100.12 ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss...

        $ ping HpPrinter.local
        PING HpPrinter.local (192.168.100.12) 56(84) bytes of data.
        64 bytes from HpPrinter.local (192.168.100.12): icmp_req=1 ttl=255 time=0.383 ms
        --- HpPrinter.local ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss...

    But here's the error when I try to print while connected to the VPN:

        hpijs[9990]: io/hpmud/jd.c 784: mdns lookup HpPrinter.local retry 1...
        ...
        hpijs[9990]: io/hpmud/jd.c 784: mdns lookup HpPrinter.local retry 20...
        hpijs[9990]: io/hpmud/jd.c 780: error timeout mdns lookup HpPrinter.local
        hpijs[9990]: io/hpmud/jd.c 88: unable to read device-id
        hp[9982]: io/hpmud/jd.c 784: mdns lookup HpPrinter.local retry 1...
        ...
        hp[9982]: io/hpmud/jd.c 784: mdns lookup HpPrinter.local retry 20...
        hp[9982]: io/hpmud/jd.c 780: error timeout mdns lookup HpPrinter.local
        hp[9982]: io/hpmud/jd.c 88: unable to read device-id
        hp[9982]: prnt/backend/hp.c 745: ERROR: open device failed stat=12: hp:/net/Officejet_Pro_L7600?zc=HpPrinter

    I am running iptables rules, but the problem doesn't appear to be firewall-related; I've tested with no rules (i.e. no firewall) and the printing problem still happens when the VPN is connected. My guess is that it is an mDNS problem, but searching Google about mDNS didn't turn up anything that seemed related to this (at my level of knowledge). Any suggestions?
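    Since plain pings to 192.168.100.12 still work while the VPN is up, one hedged workaround is to take mDNS out of the picture and point the print queue at the IP address instead of the HpPrinter.local name. A sketch using HPLIP and CUPS tools; the queue name is illustrative and the driver choice still needs to match the printer:

        # Re-run HP's setup against the printer's IP rather than its mDNS name
        sudo hp-setup -i 192.168.100.12

        # Or point a queue at a raw JetDirect socket on the printer
        sudo lpadmin -p Officejet_Pro_L7600 -E -v socket://192.168.100.12:9100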

    Read the article

  • ISA 2000 and COD MW2 Steam

    - by twlichty
    OK, so maybe not the "proper use" of network resources, but we enjoy the odd COD game during lunch hours. When we played COD4, we had a dedicated server set up at the back of the server room. With MW2, we need to be able to connect to Steam to play multiplayer. I found this support article, which outlines all the ports I need to open: https://support.steampowered.com/kb%5Farticle.php?ref=8571-GLVN-8711 I went through and created the following rules in ISA 2000 (I'm stuck with 2000 for now):

        Protocol Definition: Steam
          Primary connection: Port 27000, UDP, Send Receive
          Secondary connection: Port range 27001-27030, Send Receive

        Protocol Definition: Steam TCP In
          Primary connection: 27014, TCP, Inbound
          Secondary connection: Port range 27015-27050, Inbound

        Protocol Definition: Steam 4380
          Primary connection: 4380, UDP, Send Receive

    When I start Steam on my local workstation (I did add an exception to the Vista firewall to allow Steam), the Steam client sits on "Updating Steam" for 5 minutes, then errors out with: "You must connect to the internet first." Any ideas? I assume I missed something. Thanks for your help.

    Read the article

  • Deleting certain files sits at "preparing to recycle" on Windows 7?

    - by Rachel
    We recently set up one of our users with a brand new Windows 7 computer, but she is unable to delete certain files. With some testing, I found I cannot move, rename, or view the properties of these files either. When trying to delete a file, it just sits at the "Preparing to recycle" popup, and the "from" section says "Discovering items..." Clicking "More details" on the popup shows that it can't find the file name or where it's recycling from.

    Other notes:
    - All the affected files are .pdf files created by a scanner. Other PDF files are fine.
    - Opening the files works fine. I can open the file, Save As a new file, and delete the new one just fine.
    - Trying to delete the file from a command prompt just sits there.
    - Rebooting the computer lets me manipulate the files as normal, but this user is responsible for scanning hundreds of documents a day and I'd rather not have to tell her to reboot her computer to delete files.
    - The user is part of the Administrators group on the computer.
    - The owner of the affected files is the user, and the attrib of the files is just A.

    Read the article

  • Weird caching bug where old version of the same web page (same filename) is still called (Windows 2008 R2, Tomcat 5.5)

    - by user717236
    This is definitely one of the strangest errors I've seen, and it occurs intermittently. I am running Windows 2008 R2, IIS 7.5, and Apache Tomcat 5.5, by the way. Let's say I have two machines, A and B, both running Windows 2008 R2. I have a web page called login.jsp on machine A, and a newer, modified version of login.jsp on machine B. Now, I copy the new login.jsp from machine B and paste it onto machine A, replacing the older version with the same filename. For whatever reason, when I hit the web page in my browser from a local machine (i.e. my laptop), it still serves the old version of the page, even though it's been replaced! I tried restarting IIS and Apache Tomcat; that didn't work. I tried restarting machine A; that didn't work. I tried a cold reboot of my local machine and that didn't work either. So I spoke to someone I can confide in for help. He said to open the login.jsp page in Notepad, put a space in, save the file, and try again. Sure enough, it worked. He said he hasn't seen this on Windows 2003, but it is occurring with Windows 2008. What I don't understand is: why did that work, what the heck is this error, and how do I really diagnose and resolve it for good instead of using the hack my colleague proposed? Is this bug related to Windows 2008, Windows 2008 R2, Tomcat, or something else entirely? Does anyone else have the same problem? Thank you for any help.
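    Tomcat's JSP engine (Jasper) keeps compiled copies of JSPs in its work directory and decides whether to recompile by comparing file timestamps, which would explain why putting a space in the file and re-saving it (updating the timestamp) made the new version appear. A hedged way to force a full refresh is to clear that cache; the commands below are the Unix forms of the sketch, and on this Windows box the equivalents would be shutdown.bat/startup.bat plus deleting the contents of work\Catalina\localhost under the Tomcat install directory:

        # Stop Tomcat, clear the compiled-JSP cache, start it again
        "$CATALINA_HOME/bin/shutdown.sh"
        rm -rf "$CATALINA_HOME/work/Catalina/localhost/"*
        "$CATALINA_HOME/bin/startup.sh"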

    Read the article

  • "Server Unavailable" and removed permissions on .NET sites after Windows Update

    - by tags2k
    Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one that hosts the majority of our sites. What the update appears to have done is stop the NETWORK user having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the app domains failed to initialise. I ran aspnet_regiis, which didn't appear to fix the problem, so I ran FileMon, which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine.

    I was wondering if anyone has an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues with any of the following updates? My main suspect at the moment is the 3.5 update, as all of the sites on the server are running on 3.5.

    - Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715)
    - Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714)
    - Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86
    - Windows Server 2003 Security Update for Windows Server 2003 (KB958687)

    Thanks for any light you can shed on this.

    Read the article

  • Generating new SID for Windows 7 cloned partition in Linux?

    - by Jack
    So I've read that the proper way to clone a Windows 7 partition is to run Sysprep after the clone is complete. For MANY reasons, this is not possible the way we are cloning these drives (long story short: the drive should be fully up and running after we clone it, with all the settings already there and requiring no user intervention; and no, not even an answer file would work, because the way we customize all the Windows 7 settings is complex and we do not want the user touching the settings). I understand Microsoft will not support Windows 7 clones that are not sysprepped, and that is fine for us. Acronis recovery tools get around this by ticking an option called "Create new NT signature", which resets the SID and GUID on any restore. Symantec has a tool called Ghostwalker which does the same thing. However, we are looking for a way to do this in Linux, because we want to use open source tools to do the imaging (fsarchiver, partclone, etc.; basically the same tools Clonezilla uses internally to clone NTFS partitions). The question is: if we clone using these tools in Linux, how would we generate a new SID afterwards (without using Sysprep)? Is there any way to do it within a Linux environment? The whole imaging process is automated, so if it is a simple command that I can just drop into my shell script, that would be even better. Of course, it would be nice to know if this is even possible. Any ideas? EDIT: Forgot to mention that the target machines we are restoring the image onto are EXACTLY the same.
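    For context, this is roughly what the open-source capture/restore step mentioned above looks like with partclone; the device and file names are placeholders, and this sketch only covers the imaging, not the SID change:

        # Capture the Windows 7 NTFS partition to an image file
        partclone.ntfs -c -s /dev/sda2 -o win7-sda2.img

        # Restore the image onto an identical target machine
        partclone.ntfs -r -s win7-sda2.img -o /dev/sda2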

    Read the article

  • Photoshop CS4. How do I make sure my color stays the same in my different .psd files? (could be RGB

    - by Kris
    I asked two Photoshop experts I know, but they haven't got a clue, because all my settings are exactly the same in both files (except possibly the RGB type; I'm not sure, please read on). I use RGB color, 72 DPI, 8 bits/channel. No adjustments (filters, like greyscale, etc.) are selected or used. The layers are both normal, and opacity and fill are 100% (yes, in both files). I took two screenshots, and you can see the difference:

        http://www.flickr.com/photos/30465871@N05/4623864297/
        http://www.flickr.com/photos/30465871@N05/4624469754/

    Both colors are ff795d, but that doesn't matter; any color I use gives me the same problem: they look different. Now, I know the CMYK settings (see screenshots) are different, but even when the settings are the same the color changes. Why is this happening and how do I solve it? My guess is that I'm working with a different type of RGB. It's sRGB IEC61966-2.1 in the file I created (per the file info raw data), but I can't find that in the file that started as a screenshot. If that's the problem, how do I change/convert the RGB profile once the file is already open? Thank you.

    Read the article

  • Best shortcut in Total Commander

    - by life-warrior
    So, what's your favourite TC shortcut or shortcut combination? Which ones do you use, and for what purpose? Among my most often used:

    - Ctrl-Left (or Ctrl-Right): open the archive or folder under the cursor in the opposite tab.
    - Ctrl-Shift-Enter, Alt-F8, Ctrl-X: copy the full file path to the clipboard.
    - Shift-F6, Shift-End (if needed), Ctrl-C: copy only the file name without the path.
    - Select files, Ctrl-M: multi-rename, for example to remove "DVDrip" from file names.
    - Ctrl-\: go to the root directory.
    - Ctrl-D, then a letter: go to the favourites directory with that highlighted letter. For example, name a downloads directory "&Downloads" in favourites, and the letter after the ampersand will be highlighted.
    - Alt-F7, feed to listbox, Ctrl-A, Mark (menu), Save selection to file: creates a file listing all files and directories inside the current one, with full paths.
    - Ctrl-[3-6]: sort files by name (3), extension (4), date (5), size (6). For example: sort by name when you need movies and soundtracks with the same name but different extensions grouped together; sort by extension when you need to find EXEs in the Windows directory; sort by date when you need to find the latest file downloaded into your dir; sort by size when you need to delete the largest files to free space.

    Read the article

  • USPTO site asks for Quicktime Plug-in which I already have installed. Why?

    - by Kensai
    Whenever I try to view the images of a patent on the USPTO site (example) using Firefox, the browser asks me to download the latest QuickTime manually. This is totally strange because I already HAVE the latest plug-in (it even appears in my Firefox add-ons list). In the past I have only been able to see patent images using Safari, never with Firefox. Is it a USPTO problem or a Mozilla one? Is there a way to fix it?

    Edit: I can't see TIFF images with Internet Explorer (either the 32-bit or 64-bit version) or with Chrome either. None of these browsers knows how to open the embedded TIFF images, because they don't recognize the installed QuickTime plugin. A USPTO conspiracy to promote Safari? Come to think of it, I had this problem on my old computer as well; it had 32-bit Vista, and now I have 64-bit Windows 7. I hate TIFF and can't find Mozilla-specific information anywhere. Argh, am I the only one here with this freak problem?!

    Read the article

  • Timeout settings for Remote Desktop Sessions to lock

    - by atroon
    Our office uses a Windows 2003 server to provide access to an accounting application. Recently I was asked to increase the amount of time it takes for the session to lock itself and require the entry of the user's password to resume. That seems to be about ten minutes, at present. I am familiar with group policy and have tweaked those settings to scavenge sessions (and thereby licenses) from sessions that have been disconnected (by the user closing the mstsc.exe client or by a network issue). That's simple and straightforward. But I can't find anything in GP to allow a longer time period before the RDP client window goes black and then, when clicked upon, requires a username and password to resume the session. I must admit this would be nice personally as well, since most of my time is spent documenting the application and/or monitoring its database, so I usually have a window open to the terminal server along with the rest of the staff in the accounting center, but I interact with it very little. I usually enter my password 10-15 times per workday, but I'm pretty good at it by now. ;) So, can this timeout period be adjusted, or are we out of luck?

    Read the article

  • Cannot properly read files on the local server

    - by Andrew Bestic
    I'm running a Red Hat 6.2 Amazon EC2 instance with stock Apache and IUS PHP53u + MySQL (+mbstring, +mysqli, +mcrypt), and phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure. I've been trying to import SQL files into the database using phpMyAdmin, having it read them from a directory on my server. phpMyAdmin lists the files fine in the drop-down, but returns a "File could not be read" error when actually trying to import. Furthermore, when trying to run file_get_contents() on a file, it also returns a "failed to open stream: Permission denied" error. In fact, when my brother attempted to import the SQL files using MySQL's SOURCE command as an authenticated MySQL user with ALL PRIVILEGES, he also got an error reading the file. It seems that we are unable to read/import these files with ANY method other than as root over SSH (although I can't say I've tried every possible method). I have never had this issue under regular CentOS (5, 6, 6.2) installations with the same LAMP stack configuration.

    Some things I've tried after searching Google and StackExchange:

    - chmod 0777 on both the directory and the files
    - chown to root and to apache (the only two users I can think of that PHP would use)
    - Importing SQL files with a total size under both upload_max_filesize and post_max_size
    - PHP open_basedir commented out, or set to "/var/www" (my sites are Apache VirtualHosts within that directory, and all the SQL files are deep within that directory)
    - PHP safe mode is OFF (it was never ON)

    At the moment I have worked around the issue for the smaller files by using the file upload method directly in phpMyAdmin, but this will not be suitable for uploading my 200+ MiB SQL files, as I don't have a stable Internet connection. Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me Google usually has an answer. Not this time, though!
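    Since chmod/chown and open_basedir have already been ruled out, one hedged possibility on a Red Hat/Enterprise Linux base is SELinux denying Apache and mysqld access to files outside their expected contexts. A quick check and a couple of candidate fixes; the SQL subdirectory below is a placeholder under the /var/www layout described above:

        # Is SELinux enforcing, and is it logging denials for httpd or mysqld?
        getenforce
        grep -E 'httpd|mysqld' /var/log/audit/audit.log | tail

        # Inspect the security context on the SQL files (adjust the path)
        ls -Z /var/www/path/to/sql/

        # Temporarily switch to permissive mode purely as a test
        setenforce 0

        # If that was the culprit, relabel the files properly instead of leaving SELinux off
        restorecon -Rv /var/www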

    Read the article

  • Wiring my internet

    - by u8sand
    I have Verizon internet service and am currently using Wi-Fi. My router is in the basement and my desktop computer is two floors above it, on the other side of the house... the worst possible positioning, but that's just how things worked out. My wireless is currently extremely unstable, so I've decided to correct the problem by wiring my computer directly. Here's the catch: when redoing the room next to it (while the wall was open) we went ahead and ran some coaxial cable from our attic to our basement (with plenty of slack on both ends; don't ask me why we didn't run CAT6 instead). The question is: can I use that coaxial cable to carry my internet connection? Naturally the router (which needs to stay where it is) takes a coaxial input and has Ethernet outputs. So presumably I would have to take an Ethernet cable, convert it to coax, run the coax to my computer, and convert back to Ethernet. Is it even possible to convert from coax to Ethernet, or do I have to attempt to fish a CAT6 cable through the house? I can't just split the coax signal, because that would require two routers and two networks (which I don't believe would work with one cable and one ISP; correct me if I'm wrong). Thanks.

    Read the article

  • Setup Firefox to save .pages as .zip automatically

    - by Mike Dtrick
    What do I want to do? I would like Firefox to save files with the .pages extension as .zip files automatically.

    Scenario: You are browsing through your email and notice your friend just sent you a message with a file attached (a .pages file in this example). Unfortunately, you have a laptop that runs Windows. Your friend continues to send tons of emails with .pages files attached, and you are tired of manually saving the files as .zip files. Ultimately, you would like Firefox to be set up so that the download/file manager recognizes the .pages extension and automatically converts it to a .zip file.

    What have I done? I have saved files manually by selecting Save As, choosing "All Files", and setting the extension to .zip. I've gone through Firefox and its documentation and have not found anything on how to accomplish this.

    Why am I doing this? To save time (only a few seconds; not the main reason), to have a simple solution that "converts" the file automatically without having to remember the manual steps (for clients who aren't exactly tech savvy), and so that clients on Windows can access the files.

    IMPORTANT NOTE: I am not trying to save the web page, but rather an Apple document equivalent to a Microsoft Word file.

    UPDATE: The really easy method would be to save one file, right-click it, choose Properties, and set all .pages files to open with WinRAR (or any other program that extracts files from a compressed archive). For the sake of learning, I am going to "neglect" this method and continue researching Firefox add-ons. I would still like Firefox or the download manager to do the bulk of the work of converting the file.

    Read the article

  • Want to use something like Citrix XenDesktop, Free Alternative?

    - by Chris
    I'm looking to go into IT, general office server management, and it looks like XenDesktop would be an awesome tool to use. If I understand it right, you store a central image of the OS you want to deploy (in an ISO file) on the main server, then use XenDesktop to pull that image down to the client, which boots the OS inside a virtual machine. Does it download the image of the OS and store it locally (like cloning the VM onto the client)? I'd love to find a free (possibly open source?) alternative to this. I keep hearing about KVM on Linux and PXE-booting a minimalistic OS to use remote KVM machines... would that be what I'm looking for? Ideally, I'd like a system:

    - that allows me to manage one central image for multiple clients (virtualized hardware), and
    - that can easily boot a thin-client OS that connects to it, the way clients connect to XenDesktop.

    Would those things be possible with some kind of free alternative? Some guidance would be greatly appreciated.
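    On the KVM side, the "one central image, many clients" idea is usually done with a read-only base image plus a per-client copy-on-write overlay. A minimal sketch using qemu-img and virt-install, under the assumptions that libvirt/KVM are installed and the image paths and VM name are hypothetical:

        # One maintained base image (the "golden" install),
        # plus a thin copy-on-write overlay per client
        qemu-img create -f qcow2 \
            -o backing_file=/var/lib/libvirt/images/win7-base.qcow2 \
            /var/lib/libvirt/images/client01.qcow2

        # Boot a VM from the overlay; only that client's changes are written to it
        virt-install --name client01 --memory 2048 --vcpus 2 \
            --disk path=/var/lib/libvirt/images/client01.qcow2 \
            --import --os-variant win7 --network network=default --graphics vnc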

    Read the article

  • What can I do to prevent system power downs?

    - by Joe King
    Yesterday I was given my brother's old laptop: Core i7, 2.67 GHz, 8 GB RAM, 128 GB SSD, Windows 7 64-bit. It's a Sony Vaio Z11, approximately 18 months old. When running something computationally intensive, the fan spins up and after about 30 seconds the machine just powers itself down with no warning. I guess it is overheating. There is nothing in the event logs to suggest what is causing it; the only thing I see is "the last system shutdown was unexpected" or something similar. This is a problem for me because I use a lot of number-crunching apps, so this pretty much makes it useless to me. I would like to know if there is anything I can do, other than the obvious things I've done already (opening it up and cleaning out dust, reinstalling the OS). According to my brother, this problem started about 6 months ago, when it was already out of warranty. If it's just used for simple things (web browsing, word processing, etc.) the problem does not occur. Any ideas for what I can do to fix this?

    Update: I found that the laptop has two hardware settings for graphics, Speed and Stamina: the Speed setting seems to use an NVIDIA GeForce GT 330M, while the Stamina setting uses the Intel chipset. With the setting on Speed, I can hear the fan the whole time, and the system powers down after a short while (5-10 minutes) even when just doing basic tasks (browsing this site, for example), but it doesn't shut down if I just leave it switched on and idle. In this mode it also sometimes freezes the screen and I have to power off myself. However, on the Stamina setting it only powers down when doing number crunching, and it never freezes the screen.

    Read the article

  • Adding a Printer to my Print Server Failing

    - by Rudi Kershaw
    So, on the Windows Server page I read the following:

        Step 4: Add Network Printers Automatically
        Print Management (Printmanagement.msc) can automatically detect all the printers that are located on the same subnet as the computer on which you are running Print Management, install the appropriate printer drivers, set up the queues, and share the printers.

        To automatically add network printers to a print server:
        1. Open the Administrative Tools folder, and then double-click Print Management.
        2. In the Print Management tree, right-click the appropriate server, and then click Add Printer.
        3. On the Printer Installation page of the Network Printer Installation Wizard, click "Search the network for printers", and then click Next.
        4. If prompted, specify which driver to install for the printer.

    I have got to this point and made sure the printer (a Canon MP620) is on and correctly plugged into the network. However, when I click "Search the network for printers", the wizard doesn't find it, and I can't get any further. Is there anything I could be doing wrong? How should I proceed from here?

    Read the article
