Search Results

Search found 2513 results on 101 pages for 'ryan scott bardsley'.

Page 31 of 101

  • Local dedicated hosting space (own hardware)

    - by Scott
    Where can I find local dedicated hosting space for my own hardware? I know I can rent dedicated hosting from various companies online, but usually that means renting their hardware too. I just need a space with a network connection and a power outlet. That's it. How much would this cost? What would I search for? Is it easily available, or is it only the sort of thing huge companies do? I'm in the greater NYC area. It's for a project I'm working on, but the machine is loud and annoying, and I'd be willing to pay a little to get it out of sight and out of mind. I don't even care too much about the quality of the network connection. I'd rather not rent someone else's hardware, because a machine like this (tons of RAM) would probably cost a fortune to rent.

    Read the article

  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have made act as a firewall/gateway for one of our offices. Most resources a remote user would come looking for exist inside. I've implemented the usual deal - basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple site-to-site IPSec links to other branches, etc. - and I'm working now on VPN. I have the WebVPN (clientless SSL VPN) working and even traversing the site-to-site links. At the moment I'm leaving a legacy OpenVPN AS in place for thick client VPN. What I would like to do is standardize on an authentication method for all VPN then switch to the Cisco's IPSec thick VPN server. I'm trying to figure out what's really possible for authentication for these VPN users (thick client and clientless). My organization uses Google Apps and we already use dotnetopenauth to authenticate users for a couple internal services. I'd like to be able to do the same thing for thin and thick VPN. Alternatively a signature-based solution using RSA public keypairs (ssh-keygen type) would be useful to identify user@hardware. I'm trying to get away from legacy username/password auth especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server but we have LDAP accounts created for only about 10% of the user base (mostly developers for Linux shell access). I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but will interface with the user's existing OpenID identity. Nothing I've seen in the Cisco suggests it can do this natively. But RSA public keys would be a runner-up, and much much better than standalone or even LDAP auth. What's really practical here?
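
    The ASA natively speaks RADIUS and LDAP but not OpenID, so any middleware of the kind described would have to present one of those to the firewall. A minimal sketch of pointing the ASA at a hypothetical LDAP-speaking shim (every name and address below is an assumption):

        aaa-server OPENID_SHIM protocol ldap
        aaa-server OPENID_SHIM (inside) host 10.0.0.50
          ldap-base-dn dc=example,dc=com
          ldap-scope subtree
          ldap-naming-attribute uid
          ldap-login-dn cn=asa-bind,dc=example,dc=com
          ldap-login-password changeme
        tunnel-group DefaultWEBVPNGroup general-attributes
          authentication-server-group OPENID_SHIM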

    Read the article

  • VMware Workstation 8 Disk I/O & Hard Faults

    - by Scott
    I have VMware Workstation 8 installed on a host machine with the following specs: Intel i5 2500K CPU, 16 GB DDR3-1600 RAM, 1 TB Western Digital Caviar Black HD. I have two Windows 7 virtual machines configured (currently running one at a time, but I will be operating both at once when my 32 GB RAM kit arrives in a couple of days). Each one is configured with 8 GB of RAM and no tweaks or performance customizations; all of the VMware settings are the defaults. When I boot into these machines and run various programs (Visual Studio, Outlook, etc.), I can hear the disk thrashing quite a bit, and Resource Monitor shows I'm getting anywhere between 300-800 hard faults per second. From the host machine, they appear to come from the VMware image; inside the virtual machine, whatever app I'm currently loading is the one causing the hard faults. As I understand it, hard faults occur (simply put) when an address in memory has been swapped out to the page file and has to be read back from the page file instead of from memory. I don't understand why this is happening, though. With 8 GB of RAM on the guest machine and 6.5 GB available, what could be causing this? I know Windows 7 supposedly improved on XP's page file management, but this much slowdown, disk thrashing and hard faulting seems excessive when I have that much free RAM. Is there anything I can do to improve the performance in my guest machines? On the host machine, I can open and run any applications at all, and hard faults stay around 0 with low disk I/O.
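
    A few commonly suggested .vmx tweaks address exactly this symptom by stopping Workstation from trimming and file-backing guest memory. A sketch, not official VMware guidance, so test one change at a time:

        # add to the virtual machine's .vmx file while it is powered off
        MemTrimRate = "0"                  # stop Workstation trimming/unmapping guest memory
        mainMem.useNamedFile = "FALSE"     # don't back guest RAM with a .vmem file on disk
        sched.mem.pshare.enable = "FALSE"  # skip page-sharing scans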

    Read the article

  • Postgres pgpass on Windows - not working

    - by Scott
    DB: Postgres 9.0. Client: Windows 7. Server: Windows Server 2008, 64-bit. I'm trying to connect remotely to a Postgres instance to perform a pg_dump to my local machine. Everything works from my client machine, except that I need to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working. To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgres is the DB user. The file has one line with the following data: *:5432:*postgres:[mypassword]. I've also tried explicit IP/dbname values, all asterisks, and every combination in between, replacing each '*' with [localhost|myip] and [mydatabasename] respectively. From my client machine, I connect using: pg_dump -h [myip] -U postgres -w -f [mylocaldumpfile] [mydbname]. I'm presuming that I need to provide the '-w' switch to suppress the password prompt, at which point it should look in the AppData directory on the server. It just comes back with "connection to database failed: fe_sendauth: no password supplied". Any insights are appreciated. As a hack workaround, if there were a way I could tell the Windows batch file on my client machine to inject the password at the Postgres prompt, that would work as well. Thanks.
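
    For comparison, libpq on Windows reads %APPDATA%\postgresql\pgpass.conf under the profile of the Windows user who runs pg_dump (note the Roaming segment missing from the path above, and that the file matters on the client, not the server), and each line needs all five colon-separated fields; the line quoted above is missing the colon between the database and username fields. A corrected sketch:

        # C:\Users\<windows-user>\AppData\Roaming\postgresql\pgpass.conf on the CLIENT
        # format is hostname:port:database:username:password
        *:5432:*:postgres:mypassword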

    Read the article

  • DNS subdomain problem - Hover.com

    - by Ryan Sullivan
    I use hover.com to manage my domain names, and I am having a huge problem pointing a sub-domain at a specific IP address. I set an A record for the sub-domain on a particular domain name that I own and pointed it at the IP address, but it is not resolving at all. The thing that confuses me is that when I set the same IP address on a sub-domain of a different domain name, it works just fine. Also, I have since deleted the DNS record from the domain where it happened to work, and when I type that address into a browser it still resolves to the IP I had set. I am not sure what is going on at all. If this seems confusing, I am sorry; I am very confused about the whole thing myself. If any clarification is needed, just ask and I will try to clear things up.
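
    A deleted record that keeps resolving is the classic signature of resolver caching: your local DNS server keeps handing out the old answer until its TTL expires. Comparing the authoritative answer with the cached one shows whether Hover is actually publishing the record you set. A sketch, where sub.example.com stands in for the real name and ns1.hover.com assumes Hover's usual nameserver naming:

        dig sub.example.com A @ns1.hover.com +short   # what Hover publishes
        dig sub.example.com A +short                  # what your resolver's cache says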

    Read the article

  • Accessing the Local System Account to accept a software licence

    - by Ryan French
    Hi All, I have a server running ColdFusion which is being used to invoke a Windows whois program on the same machine. Each time I call this command via ColdFusion (using cfexecute), the command times out. I believe the issue is that the first time a user runs the .exe file, they are asked to accept the licence. ColdFusion is currently set to run under the Local System account, and I am wondering if it is possible to somehow log into this account and run the program manually via the console so that I can accept the licence. I guess my only other option is to change the account ColdFusion runs under, but I would rather not do this.
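
    One way to get that console without touching the service account: Sysinternals PsExec can open an interactive command prompt running as Local System, from which the whois tool can be run once to accept the licence. A sketch, run from an elevated prompt with PsExec on the PATH:

        rem -i = interactive on this session, -s = run as Local System
        psexec -i -s cmd.exe
        rem then launch the whois .exe from that prompt and accept the licence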

    Read the article

  • Looking at desktop virtualization, but some users need 3D support. Is HP Remote Graphics a viable solution?

    - by Ryan Thompson
    My company is looking at desktop virtualization, and we are planning to move all of the desktop compute resources into the server room or data center and provide users with thin clients for access. In most cases a simple VNC or Remote Desktop solution is adequate, but some users run visualizations that require 3D capability, something VNC and Remote Desktop cannot support. Rather than making an exception and providing desktop machines for these users, complicating our rollout and future operations, we are considering adding servers with GPUs and using HP's Remote Graphics to provide access from the thin clients. The demo version appears to work acceptably, but there is a bit of a learning curve, it's not clear how well it would work for multiple simultaneous sessions, and it's not clear whether it would be a good solution for non-3D sessions. If possible, as with the hardware, we want to deploy a single software solution instead of a mishmash. If anyone has experience managing a large installation of HP Remote Graphics, I would appreciate any feedback you can provide.

    Read the article

  • Two users using the same user profile while not in a domain.

    - by Scott Chamberlain
    I have a Windows Server 2003 machine acting as a terminal server; this computer is not a member of any domain. We demo our product on the server by creating a user account. The person logs in, uses the demo for a few weeks, and when they are done we delete the user account. However, every time we do this it creates a new folder in C:\Documents and Settings\. I know that with domains you can have many users point at one profile and make it read-only, so all changes are dumped afterwards, but is there a way to do that when the machine is not on a domain? I would really like it if I didn't have to remote in and clean up the folders every time.
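
    Mandatory profiles also work for local accounts: renaming the profile's registry hive from NTUSER.DAT to NTUSER.MAN makes Windows discard all changes at logoff, so one demo account could be reused instead of recreated. A sketch, run while the demo user is logged off, with the account name and profile path as assumptions:

        ren "C:\Documents and Settings\demo\NTUSER.DAT" NTUSER.MAN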

    Read the article

  • ESXi - Should failover node be in the same geographic location?

    - by Ryan
    For some reason it seems to me that at least one failover node should be in the same building, but really I have no idea. Could there be an issue with routing delays for users during a failure? I'm just imagining reasons at this point. Let me know: should at least one failover node be at the same geographic location as the other? I am trying to prevent what appears to be a poor decision, so any feedback or life experience you can share would be grand. We will mostly be running Windows Server 2008 with SQL Server 2008 as our guest OS.

    Read the article

  • User-unique .vimrc file for servers as root user

    - by Scott
    I'm getting thrown into an IDE war at the office, where multiple users have root access on our servers and like to have everything their own way with VIM. Unfortunately, our servers are locked down enough that if you want to do anything, you need root access. We get tired of typing sudo before each command, which would mean constantly retyping the wonderfully complex passwords mandated on us, so naturally we all just execute sudo su - upon login to avoid all of this, frowned upon as that is. Of course, when it comes to VIM and custom .vimrc files, we often end up stepping on someone else's custom .vimrc, and these files contain whacked-out functionality that may override behavior we know nothing about, much less have the patience to learn. When working as root on a Linux box, is there any way for each of us to keep our own .vimrc without overwriting the file over and over again every time someone wants to use VIM? Ideally we'd have a universal solution across all servers, since we have many virtual machines, all with VIM installed; our Microsoft Windows user-specific home directories are mounted on the servers under /home/username. Any recommendations for accommodating this?
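
    One trick that fits this setup: sudo records the invoking user in $SUDO_USER, which survives 'sudo -i' (though plain 'sudo su -' wipes it), and vim's -u flag loads an alternate vimrc. A sketch for root's ~/.bashrc, assuming the mounted home directories described above:

        # use the invoking user's own vimrc when working as root
        if [ -n "$SUDO_USER" ] && [ -f "/home/$SUDO_USER/.vimrc" ]; then
            alias vim="vim -u /home/$SUDO_USER/.vimrc"
        fi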

    Read the article

  • how to change the default open-with program to a program on the second disk

    - by Scott
    I have a 250 GB HDD for my system and a 60 GB SSD on a SATA port, and I installed most of my applications on the SSD. There's a strange thing, though: I cannot change the default open-with program to a program that is on the SSD. I thought it might be caused by permissions, so I gave my user 'full control' on the Security tab in the disk properties, but changing permissions did not work. After I choose an application (I've tried Notepad++, Sublime, 7-Zip, etc.), nothing is added to the window below. Also, if I install 7-Zip on my machine, its right-click menu items do not get added.

    Read the article

  • update nokia app installed via ovi

    - by Ryan Fernandes
    I've installed a very handy application (Nokia Battery Monitor 1.1) and was quite pleased to see v1.2 out recently. The problem is that I can't seem to update this app on my phone via the Ovi app; the 'download' link is disabled. I also tried the 'sw update' app, but it reports that all applications are up to date. Any idea how to do this without uninstalling and reinstalling the app? The phone model is Nokia 5800.

    Read the article

  • What's a good (affordable) business router that can limit traffic to certain machines (IPs/ports)?

    - by Ryan Detzel
    We are using the basic Verizon router, but it sucks, so we're looking for a new one that allows us to hold users and our Hadoop cluster to certain limits. Our problem is that one person can start downloading something and kill the network, and every hour we pull logs into our cluster, which floods the network unless we rate-limit it. Ideally we want to be able to say:
      - Total: 35 Mbps
      - Hadoop cluster: 15 Mbps
      - Phones: 1 Mbps
      - Office (25 people): 19 Mbps, but no one machine can have more than 5 Mbps
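
    For reference, those numbers map directly onto HTB classes if a Linux box ends up fronting the connection. A sketch, where eth0 and the subnets are assumptions:

        tc qdisc add dev eth0 root handle 1: htb default 30
        tc class add dev eth0 parent 1: classid 1:1 htb rate 35mbit
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 15mbit ceil 15mbit  # Hadoop
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1mbit ceil 1mbit    # phones
        tc class add dev eth0 parent 1:1 classid 1:30 htb rate 19mbit ceil 19mbit  # office
        tc filter add dev eth0 parent 1: protocol ip u32 match ip src 10.0.1.0/24 flowid 1:10
        # the per-machine 5 Mbps cap inside the office class would need child
        # classes or per-host filters on top of this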

    Read the article

  • Why would searching in Websphere Portal Administration exclude some results?

    - by Scott Leis
    I have logged into a website based on Websphere Portal 6.0, gone to the Administration console, then to Portlet Management - Portlets. At a guess, there are about 200 portlets on this server. 22 portlets have a title starting with "IGM", but if I use the "Title starts with" search option and enter "igm" (or any case variation), only one of these portlets is found. The portlets excluded from the "Title starts with" search are also excluded from the "Title contains" search, but some of them can be found with the "Unique name contains" search (noting that only 5 of these have a unique name). Why would title searches exclude these portlets? I also see similar behaviour in other areas of the Portal administration. E.g. going to Portal Settings - Custom Unique Names - Pages, and performing title searches excludes some results, depending on the search terms entered.

    Read the article

  • Find users that are auto forwarding / redirecting their email in Exchange 2010 using Powershell

    - by Ryan H
    We are using Live@edu, which is essentially a hosted Exchange server with some additional features and limitations to work around, and I'm trying to find everybody who is forwarding or redirecting email from their accounts. I am trying to remove old accounts that have not been used, but we give users instructions on redirecting email, so we should expect that some users are indeed redirecting theirs, which means their last login/logoff times do not reflect whether they are using auto-forward or auto-redirect rules. How could I find a list of users with forwarding or redirection rules using Exchange 2010 PowerShell cmdlets? /EDIT: It may be sufficient for my purposes to find whether there are ANY server-side rules, regardless of whether the rule forwards/redirects or does some other action.
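
    A starting point, untested against Live@edu (which restricts some cmdlets) and relying on Get-InboxRule, which needs Exchange 2010 SP1:

        # drop the Where-Object to list ANY server-side rule, per the /EDIT
        Get-Mailbox -ResultSize Unlimited | ForEach-Object {
            Get-InboxRule -Mailbox $_.Identity |
                Where-Object { $_.ForwardTo -or $_.RedirectTo -or $_.ForwardAsAttachmentTo } |
                Select-Object MailboxOwnerId, Name, ForwardTo, RedirectTo
        }
        # forwarding set at the mailbox level (by an admin) lives elsewhere:
        Get-Mailbox -ResultSize Unlimited |
            Where-Object { $_.ForwardingAddress } |
            Select-Object Name, ForwardingAddress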

    Read the article

  • Install PHP 5.1.2, Requires: libcurl.so.3()(64bit) error

    - by Scott Rowley
    I'm trying to install PHP 5.1.2 on a CentOS 6 server (to grandfather in some old websites). I downloaded an RPM file (php-5.1.2-5.x86_64.rpm), but when I run: yum install php-5.1.2-5.x86_64.rpm I get the following error: Error: Package: php-5.1.2-5.x86_64 (/php-5.1.2-5.x86_64) Requires: libcurl.so.3()(64bit). I have tried several things, including: ln -s /usr/lib64/libcurl.so.4 /usr/lib64/libcurl.so.3 (to symlink to the newer version), and downloading curl-7.15.5-2.1.el5_3.5.x86_64.rpm, taking libcurl.so.3 out of that RPM, and placing it at /usr/lib64/libcurl.so.3 with the same permissions as libcurl.so.4. Nothing has worked. Any ideas?
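
    For what it's worth, the symlink alone can never satisfy yum: Requires: libcurl.so.3()(64bit) is resolved against RPM's provides database, not the filesystem. A risky sketch that forces past the dependency check and then gives the binary something to load (there is no ABI guarantee that libcurl.so.4 behaves as .so.3):

        rpm -ivh --nodeps php-5.1.2-5.x86_64.rpm
        ln -s /usr/lib64/libcurl.so.4 /usr/lib64/libcurl.so.3
        ldconfig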

    Read the article

  • tomcat 6 start mode setting for production

    - by Ryan Fernandes
    Tomcat 6 (as a Windows service) has a 'Start Mode' with options of 'java', 'jvm' or 'exe', which can be set via the Tomcat monitor (system tray icon). If I set this to 'java', I can see a forked java.exe process for Tomcat; if I choose either of the other two, I don't see a separate process. I'd like to know what these settings mean and which one is most appropriate in production.
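
    The wrapper behind tomcat6.exe is Apache Commons Daemon's procrun, and its documentation matches what you observed: 'jvm' loads jvm.dll inside the service process (the usual choice for a production Windows service), 'java' forks a separate java.exe, and 'exe' runs the configured image as its own process. A sketch of switching the mode from the command line, assuming the service is named Tomcat6:

        rem //US// means "update service"
        tomcat6 //US//Tomcat6 --StartMode=jvm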

    Read the article

  • Errors in the error log

    - by Ryan Murphy
    I am trying to use Zend Framework 2. I followed these instructions on CentOS 6 via SSH: http://framework.zend.com/manual/2.0/en/user-guide/skeleton-application.html. When I try to start my website it gives an error, and in the error log I get this:

        [Sun Jun 30 16:02:17 2013] [error] [client 109.217.190.75] SoftException in Application.cpp:357: UID of script "/home/mydomain/public_html/public/index.php" is smaller than min_uid
        [Sun Jun 30 16:02:17 2013] [error] [client 109.217.190.75] Premature end of script headers: index.php

    What do these errors mean, and how do I fix them?
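
    That SoftException comes from suPHP, which refuses to execute scripts owned by a UID below its configured min_uid; it typically means the files under public_html are owned by root or another system account rather than the site user. A sketch of re-owning the tree, with the account name an assumption:

        chown -R mydomain:mydomain /home/mydomain/public_html
        # suPHP also rejects group/world-writable scripts, so normalize modes
        find /home/mydomain/public_html -type d -exec chmod 755 {} \;
        find /home/mydomain/public_html -type f -exec chmod 644 {} \;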

    Read the article

  • What is the best way to back up a dedicated web server? (Amanda versus rsync)

    - by Scott
    Hello everyone, I am trying to establish valid backups for my web server, a Linux box running CentOS. I have asked around, and 'rsync' was suggested by some of the Server Fault community. However, my coworker says that rsync only copies over the physical files and isn't really a usable "snapshot." He suggested using 'amanda', saying it does full server snapshots, which are more what I am accustomed to. I know at my company we have virtual machines that we take snapshots of, and we can restore everything back to just as it was with little effort and little downtime. Is this possible with rsync? Or would I need to create a new server and then migrate the files back and redo various configurations? I think I prefer being able to just reset everything to a point in time. Forgive my ignorance; backups are something that I have never really had to worry about before.
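
    rsync can get close to point-in-time snapshots with --link-dest: each run produces what looks like a full copy of the tree, but unchanged files are hard links into the previous run, so they cost no extra space and any run can be restored from whole. A sketch run from the backup host, with paths and hostname as assumptions:

        TODAY=$(date +%F)
        rsync -a --delete \
          --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' \
          --link-dest=/backups/web/latest \
          root@webserver:/ "/backups/web/$TODAY/"
        ln -sfn "/backups/web/$TODAY" /backups/web/latest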

    Read the article

  • backuppc - how to back up remote (over the internet) clients?

    - by Scott
    I am testing out BackupPC, which works great so far backing up Windows clients on a LAN via SMB (no backup client/agent required). However, I have quite a few laptops and desktops in various remote locations, some of which move around. I need some way to have a remote computer create an outgoing connection for backup purposes (Windows XP/7). I know BackupPC supports SMB, rsync and tar, but I believe these are all connections going from the server TO the client. So I either need a way to VPN the client in on a timed basis, or, even better, a way for the client to somehow connect to the server (SSH?) and initiate its own backup (rsync?). Of course, this all needs to be pre-installed by me and require no maintenance by the end user and no dialogs on their side. What do you think?
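
    One pattern that matches "client dials home": the remote machine opens a reverse SSH tunnel to the BackupPC server, which then reaches the client's rsync daemon back through the tunnel. A sketch, with ports and names as assumptions; on Windows this would run under cygwin ssh or similar from a scheduled task:

        # run on the remote client: expose its local rsyncd (873) to the
        # server as port 9022 for the duration of the backup window
        ssh -N -R 9022:localhost:873 backuppc@backup.example.com
        # in BackupPC, that host then uses $Conf{RsyncdClientPort} = 9022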

    Read the article

  • load balancing two web servers on two different ISPs?

    - by Scott
    I have two ISPs that provide me hosting via Apache/PHP/MySQL, and I am running Drupal on both. On occasion the MySQL server will go away (crash), so I was hoping to find a reasonable failover: if server A's SQL is down, all traffic is sent to server B. I know this is traditionally handled in DNS, where a second alternate IP is given out if there is a problem, or something similar, but I do not have control over the ISP beyond being able to run PHP, Perl and the usual Apache stuff. I do have static IPs on each ISP, and I can create DNS entries (A/CNAME/TXT). So I was hoping there might be a way to have a script check whether Drupal has a problem and, if so, somehow alter DNS. Or any other ideas? (Other than spending lots more money on a better ISP.)
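
    Whatever ends up flipping the DNS, something has to detect the dead MySQL first, and fetching a page that needs the database is a reasonable probe. A minimal sketch for cron, where the URL and the string matched are assumptions and the actual DNS change is left as a placeholder, since it depends entirely on what the provider exposes:

        #!/bin/sh
        if ! curl -fs --max-time 10 http://serverA.example.com/ | grep -q 'Drupal'; then
            echo "server A unhealthy at $(date)" >> /var/log/failover.log
            # trigger the switch to server B here, via whatever DNS
            # update mechanism the registrar/provider offers
        fi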

    Read the article

  • Windows Proxy Server advice

    - by Scott
    I have a web server that currently has about 10 IP addresses, and various clients that require a proxy server to route their internal traffic through. The load is not that great, so I'd like to have this ONE server act as a proxy for 10 different clients, each client having their own unique IP on the server. The hardware is already set up, but I'm wondering what software solutions you would recommend. I've looked at WinGate, Squid, etc., but am pretty green at this. Maybe there's even a way to have Windows do this natively? I'm running Windows Server 2008, 32-bit.
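
    Squid handles the one-outbound-IP-per-client requirement directly with tcp_outgoing_address. A sketch of the relevant squid.conf fragment, with all addresses made up:

        acl client_a src 203.0.113.10
        acl client_b src 203.0.113.11
        tcp_outgoing_address 198.51.100.1 client_a
        tcp_outgoing_address 198.51.100.2 client_b
        http_access allow client_a
        http_access allow client_b
        http_access deny all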

    Read the article

  • CPU clock scales down so computer is unusable after switching to battery

    - by Ryan
    When my laptop is plugged in it runs great; however, when I unplug it and run on battery power, the CPU clock speed scales down pretty much all the way. I know this is happening from monitoring the clock speed: plugged in, it usually stays between 1000 MHz and 3000 MHz, but when I unplug it, it quickly scales down to less than 500 MHz, gets as low as 100 MHz, and NEVER scales up at all on battery power. After I plug the power back in, it begins operating normally within about a minute. I have tried setting the MIN and MAX CPU performance in power options to 100%, and have tried messing around with cooling settings, which seemed to be a problem on HP laptops. I have a Toshiba Satellite M500-ST6444 running Windows 7. The BIOS is up to date; I have tried two versions.
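
    It may be worth confirming from an elevated prompt that the battery-side (DC) minimum actually took, since the GUI occasionally fails to apply it. A sketch using Windows 7's built-in powercfg aliases:

        powercfg -setdcvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100
        powercfg -setactive SCHEME_CURRENT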

    Read the article

  • View scheduled recordings remotely

    - by Scott
    I have one computer equipped with a TV tuner card. Its Recorded TV folder is shared so that other computers on the network can watch recordings. Now that I am upgrading my computers to Windows 7, they have Media Center, which makes for a nicer viewing experience than a Windows Explorer folder view. I have set up Media Center on my laptop to treat the Recorded TV folder on the tuner-equipped PC as an extended library location, so now I can view all the recordings from within Media Center on the laptop. To make this experience even better, is there a way to view the list of scheduled recordings on the tuner-equipped PC from within Media Center on my laptop?

    Read the article

  • Add entire 300 GB filesystem to Git Annex repository?

    - by Ryan Lester
    By default, I get an error from the process that I have too many open files. If I lift the limit manually, I get an error that I'm out of memory. For whatever reason, Git Annex in its current state does not seem optimised for this sort of task (adding thousands of files to a repository at once). As a possible solution, my next thought was to do something like:

        cd /
        find . -type d | git annex add --$NONRECURSIVELY
        # need to add parent directories of each file first, or adding files fails
        find . -type f | git annex add

    The problem with this solution is that the documentation doesn't seem to offer a way to non-recursively add a directory in Git Annex. Is there something I'm missing, or a workaround for this? If my proposed solution is a dead end, how have other people solved this problem?
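
    Independent of the non-recursive question, batching sidesteps both the open-file and memory limits by feeding git annex bounded chunks of the file list. A sketch:

        cd /
        # consider pruning /proc, /sys, /dev with -prune if this really is /
        find . -type f -print0 | xargs -0 -n 1000 git annex add
        git commit -m "add filesystem to annex"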

    Read the article
