Search Results

Search found 7799 results on 312 pages for 'changing'.


  • Windows Explorer and UAC: run elevated

    - by syneticon-dj
    I am profoundly annoyed by UAC and switch it off for my admin user wherever I can. Yet there are situations where I can't - especially on machines not under my continuous administration. In those cases I am always challenged with the task of traversing directories using my administrative user via Windows Explorer where regular users do not have "read" permissions. The two possible approaches to this problem so far:

    1. Change the ACLs on the directory in question to include my user (Windows conveniently offers the Continue button in the "You don't currently have permissions to access this folder" dialog). This obviously sucks, since more often than not I do not want to change ACLs but just look into the folder's contents.
    2. Use an elevated cmd.exe prompt along with a bunch of command line utilities - this usually takes a lot of time when browsing through large and/or complex directory structures.

    What I would love to see would be a way to run Windows Explorer in elevated mode. I have yet to find out how to do so. But other suggestions solving this problem in an unobtrusive way, without changing the entire system's configuration (and preferably without the need for downloading/installing anything), are very welcome too. I have seen this post with a suggestion for altering HKCR - interesting, but it changes the behavior for all users, which I am not allowed to do in most situations. Also, some folks have suggested using UNC paths to access the folders - unfortunately this does not work when accessing the same machine (i.e. \\localhost\c$\path), as the "Administrators" group membership is still stripped from the token and a re-authentication (and thus the creation of a new token) would not happen when accessing localhost.
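
    To make the second approach concrete, this is roughly what I end up doing today (the folder path is just a placeholder, and it only covers read-only inspection):

        REM from a cmd.exe started with "Run as administrator":
        REM walk the tree without touching its ACLs, then inspect the existing permissions
        dir /a /s "D:\RestrictedFolder"
        icacls "D:\RestrictedFolder"

    It is still far slower than a real elevated Explorer window, which is what I am actually after.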

    Read the article

  • Time Machine is getting stuck at "Preparing to Back Up" and my Trash isn't emptying

    - by zarose
    I have encountered two separate problems, but I am putting them in the same question in case they are related. First, my Trash would not empty. It seems to be getting stuck on certain files, because I will reset my MacBook and some of the files will be deleted, and then if I remove a file or two at random, more can be deleted. Some of these files had strange characters in their names. I tried changing the names to single characters, but this did not help. Next, I attempted to back up my MacBook using Time Machine. I plugged in the HDD I've been using for this, but every time I try to start the backup, Time Machine gets stuck at "Preparing to Back Up". I definitely need to know how to fix the Time Machine problem, but I am curious how to solve the Trash problem as well, and whether or not these problems are related.

    EDIT: Console.app logged the following this morning before I left on a trip. I did not bring the HDD with me.

        6/5/12 7:41:28.312 AM com.apple.backupd: Starting standard backup
        6/5/12 7:41:46.877 AM com.apple.backupd: Error -35 while resolving alias to backup target
        6/5/12 7:41:58.368 AM com.apple.backupd: Backup failed with error: 19
        6/5/12 7:59:08.999 AM com.apple.backupd: Starting standard backup
        6/5/12 7:59:10.187 AM com.apple.backupd: Backing up to: /Volumes/Seagate 3TB Mac/Backups.backupdb
        6/5/12 7:59:13.308 AM com.apple.backupd: Event store UUIDs don't match for volume: Macintosh HD
        6/5/12 7:59:13.331 AM com.apple.backupd: Event store UUIDs don't match for volume: Blank
        6/5/12 7:59:13.683 AM com.apple.backupd: Deep event scan at path:/ reason:must scan subdirs|new event db|
        6/5/12 8:23:31.807 AM com.apple.backupd: Backup canceled.
        6/5/12 8:23:33.373 AM com.apple.backupd: Stopping backup to allow backup destination disk to be unmounted or ejected.
        6/5/12 9:51:21.572 PM com.apple.backupd: Starting standard backup
        6/5/12 9:51:22.515 PM com.apple.backupd: Error -35 while resolving alias to backup target
        6/5/12 9:51:32.741 PM com.apple.backupd: Backup failed with error: 19
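
    In case it matters, this is how I have been checking whether Time Machine can still see the disk at all (assuming OS X 10.7 or later, where tmutil exists - error -35 suggests it cannot resolve the backup volume):

        # does backupd still resolve the backup destination?
        tmutil destinationinfo
        # is the "Seagate 3TB Mac" volume actually mounted?
        ls /Volumes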

    Read the article

  • Transfer disk contents *without* cloning tools

    - by Chris Cummins
    Is it possible to "clone" a disk which contains programs by performing a copy of all the disk contents (preserving file attributes) from source to destination disk, and unplugging the source disk and changing the drive letter of the destination disk to match that of the source? Context I have a two disk Windows 8 system with a system drive and a data drive. Recently, the data drive developed a number of bad sectors leading to IO errors. I have been sent a replacement drive so I simply need to clone the contents of this data drive onto the replacement. The drive contents include documents & media, user folders (My Documents and related), and some programs (games etc). Problem The problem is that the bad sectors on the source disk causes most disk cloning tools to fail with read errors. Attempted approaches include: Disk clone from live boot environment with Acronis True Image. Fails due to read errors. Disk clone from live boot environment with Clonezilla. Fails due to read errors. Disk clone using Roadkil's Unstoppable Copier. Fails due to hardware timeouts in the HDD (application hangs indefinitely). A straightforward copy from source to destination disk using FreeFileSync (preserving file attributes and metadata). This succeeds. So at the moment I have a replacement disk which contains all of the data from the original disk. Now all I need to is somehow get Windows to replace all references to the old disk to the new one. Is this possible by simply swapping the assigned drive letters? Any help would be greatly appreciated, thanks!

    Read the article

  • Booting an Asus EeePC from a LiveCD USB stick

    - by Bryan
    I have two identical Asus EeePC netbooks that are both installed with Ubuntu. One of them was sitting on the closet shelf and the battery went completely dead. When I charged the battery and tried to boot it, I got the "No init found" error. In trying to follow the suggested way to fix it posted here, I used the Startup Disk Creator on my Ubuntu 11.10 desktop machine to create a USB stick with a bootable Ubuntu 11.10 live CD on it (the netbook doesn't have a CD drive). I plugged the USB stick into the netbook with the init issues, went into the BIOS, selected the USB stick as the first choice to boot from, and did a hard restart. It then just stuck at the flashing underscore.

    Not knowing why it wasn't working, I tried booting my working netbook from the USB stick. When I got into the BIOS on the working netbook, I noticed the description in the boot order section for the USB device was different. On the non-working netbook the description was SWISSBIT (the name of the USB stick), but on the working netbook it was just "Rem. Drive". I also noticed on the working netbook there was an additional option under the boot order section that allowed me to choose which hard drive to boot from. This section showed two hard drives, one of them being my USB stick. So, rather than changing the device boot order, I selected the USB stick as the hard drive to boot from first and it worked like a champ - I was able to boot into the LiveCD on the USB stick.

    Seems to me the working netbook is seeing the LiveCD USB stick as a hard drive, whereas the non-working netbook is seeing it as a plain ol' USB stick. The BIOS is the exact same version on both netbooks... any idea why it works on one and not on the other?

    Read the article

  • VPN into multiple LAN Subnets

    - by Rain
    I need to figure out a way to allow access to two LAN subnets on a SonicWall NSA 220 through the built-in SonicWall GlobalVPN server. I've Googled and tried everything I can think of, but nothing has worked. The SonicWall NSA management web interface is also very unorganized; I'm probably missing something simple/obvious.

    There are two networks, called Network A and Network B for simplicity, with two different subnets. A SonicWall NSA 220 is the router/firewall/DHCP server for Network A, which is plugged into the X2 port. Some other router is the router/firewall/DHCP server for Network B. Both of these networks need to be managed through a VPN connection. I set up the X3 interface on the SonicWall with a static IP in the Network B subnet and plugged it in. Network A and Network B should not be able to access each other, which appears to be the default configuration. I then configured and enabled VPN. The SonicWall currently has the X1 interface set up with a subnet of 192.168.1.0/24 with a DHCP server enabled, although it is not plugged in. When I VPN into the SonicWall, I get an IP address supplied by the DHCP server on the X1 interface and I can access Network A remotely, although I do not have access to Network B.

    How can I allow VPN clients access to both Network A and Network B, while keeping devices on Network B from accessing Network A and vice versa? Is there some way to create a VPN-only subnet (something like 10.100.0.0/24) on the SonicWall that can access Network A and Network B without changing the current network configuration or allowing devices on both networks to "see" each other? How would I go about setting this up?

    Diagram of the network (hopefully this kind of helps):

             WAN1                              WAN2
               |                                 |
        [ SonicWall NSA 220 ]-(X3)---------[ Router 2 ]
               |                                 |
              (X2)                          10.1.1.0/24
         192.168.2.0/24

    Any help would be greatly appreciated!

    Read the article

  • Windows 2008 R2 file share - any way to "lock it down" outside of a 3rd party app?

    - by TheCleaner
    I have a 3rd party app that "makes a call" to write files to a file share on our network using the currently logged in credentials of the Windows domain user. Meaning the 3rd party app doesn't pass the app's credentials but simply issues a behind-the-scenes copy command to take a specified source file and copy/move it to the destination "repository" on the file share. The basic premise is that it keeps revisions/approvals for Document Control (think svn/git I guess, similar to this question: Lock down Windows folder to only be updatable by SVN). This all works fine... but here's my issue: I need a way to lock down the file share from being accessed/modified outside of the 3rd party app (meaning prevent Explorer/Word/Excel/etc. from getting to that share). I know I can do the following:

    1. Make the share a hidden share ($) - this definitely helps. Most users would have zero clue on how to get to such a share. Solves probably 95% of my issue.
    2. Go one step further and set the "Hidden" attribute on the folders in the hidden share - this would go a little further in that even if a user knows the path to the hidden share, like \\server\hidden$, they still won't see folders in that share without changing their Explorer options to "show hidden files/folders".

    Any other ideas on how I can lock this down? The users still need modify rights to this share/folders since the 3rd party app relies on their Windows permissions to that location when copying the files into it. I can't really use 3rd party tools to password protect the folder/share without causing the 3rd party app functions to fail.
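
    To make options 1 and 2 concrete, this is roughly what I mean (share name, path and group are examples only, and I'm not certain the /GRANT syntax is identical on 2008 R2):

        REM create the hidden share, granting modify rights to the users the app impersonates
        net share Repo$=D:\DocControl\Repository /GRANT:"DOMAIN\Doc Users",CHANGE
        REM mark the folders inside it hidden so casual Explorer browsing shows nothing
        attrib +h "D:\DocControl\Repository\*" /S /D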

    Read the article

  • Failed none and iptables

    - by Michael
    The problem is that when I ssh to my host with PuTTY and enter the user name, the password prompt is delayed. I found this is directly related to my iptables rules and can be solved by changing the default policy to ACCEPT. If the default INPUT policy is ACCEPT, the password prompt comes up immediately:

        Mar 13 00:05:01 server-ubuntu sshd[6154]: Connection from 192.168.0.10 port 26304
        Mar 13 00:05:06 server-ubuntu sshd[6154]: Failed none for acid from 192.168.0.10 port 26304 ssh2

    However, if the default INPUT policy is DROP, I get a noticeable delay before the password prompt appears after I enter the username:

        Mar 13 00:07:12 server-ubuntu sshd[6177]: Connection from 192.168.0.10 port 26333
        Mar 13 00:07:35 server-ubuntu sshd[6177]: Failed none for acid from 192.168.0.10 port 26333 ssh2

    For the second case, I tried setting the default policy for the FORWARD and OUTPUT chains to ACCEPT, but it didn't help. The only rule in this case is:

        -A INPUT -i eth1 -m mac --mac-source 00:26:XX:XX:XX:XX -j ACCEPT

    00:26:XX:XX:XX:XX is the MAC address from which I am trying to ssh to the server's LAN (eth1). I'm sure there has to be some rule I can use, while the default INPUT chain policy stays DROP, in order to get the password prompt immediately. I realize that the error message in the log is something normal and part of the usual authentication sequence.
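
    In case it helps, this is the kind of rule I have been experimenting with (a guess on my part - with a DROP policy, the replies to the server's own reverse-DNS lookup of the client seem to get dropped, and sshd waits for them to time out):

        # allow return traffic for connections the server itself initiated (e.g. DNS lookups)
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT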

    Read the article

  • Why doesn't NFS recognize a new UID?

    - by user76177
    I have two servers running RHEL6. I have root access to both. The main server, which I will refer to as server, is a database server. The application server, which I will refer to as client, mounts a directory from server via NFS. There is a user, appuser, on both client and server. However, appuser's UID on client is 502. appuser's UID on server is 506. Both users need read and write capability on the NFS share. To facilitate this, I made the share owned by appuser on server. Running id appuser on each yields: uid=506(appuser). Of course, client does not recognize that ownership, since appuser has a different id on client. So I did the following:

    1. Changed UID of user in /etc/passwd on client to be 506.
    2. Changed ownership of appuser's $HOME on client to be appuser again so that I could log in.

    Now, when I go to look at the NFS share from the client side, I see that it is owned by 502. 502 is the OLD id for appuser on client. I can't change ownership of the NFS share from client, since that is a volume that physically resides on server. I need to make sure that the NFS share shows ownership of appuser from both server and client. What step have I missed since changing the appuser id on client? NOTE: I have not rebooted client (or anything else.)
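
    For reference, what I was planning to try next (the mount point is made up, and I'm not certain this is the actual fix): if the 502 is just a cached name-to-UID mapping on the client, flushing the caches and remounting might clear it:

        # on client (RHEL6)
        nscd -i passwd              # flush the passwd cache, if nscd is running
        service rpcidmapd restart   # NFSv4 only: restart the id mapper
        umount /mnt/appshare && mount /mnt/appshare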

    Read the article

  • How can I totally flatten a PDF in Mac OS on the command line?

    - by Matthew Leingang
    I use Mac OS X Snow Leopard. I have a PDF with form fields, annotations, and stamps on it. I would like to freeze (or "flatten") that PDF so that the form fields can't be changed and the annotations/stamps are no longer editable. Since I actually have many of these PDFs, I want to do this automatically on the command line. Some things I've tried/considered, with their degree of success:

    1. Open in Preview and Print to File. This creates a totally flat PDF without changing the file size. The only way to automate it seems to be to write a kludgy UI-based AppleScript, though, which I've been trying to avoid.
    2. Open in Acrobat Pro and use a JavaScript function to flatten. Again, not sure how to automate this on the command line.
    3. Use pdftk with the flatten option. But this only flattens form fields, not stamps and other annotations.
    4. Use cupsfilter, which can create PDF from many file formats. Like pdftk, this flattened only the form fields.
    5. Use cups-pdf to hook into the Mac's print server and save a PDF file instead of printing. I used the MacPorts version. The resulting file is flat but huge. I tried this on an 8MB file; the flattened PDF was 358MB! Perhaps this can be combined with a ghostscript call as in Ubuntu Tip: Howto reduce PDF file size from command line.

    Any other suggestions would be appreciated.
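
    To illustrate the last idea, this is roughly the ghostscript re-distill I had in mind for shrinking the huge cups-pdf output (file names are examples, and I haven't verified how much size it actually recovers):

        gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook \
           -sOutputFile=flattened-small.pdf huge-cups-pdf-output.pdf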

    Read the article

  • dnsmasq Client TTL

    - by user548971
    I have a situation where my hosts file is constantly changing. Because of this I don't want clients to cache ip addresses resolved using the hosts file. Here is the command that starts dnsmasq for me:

        /usr/sbin/dnsmasq -K -R -y -Z -b -E -S 8.8.8.8 -l /tmp/dhcp.leases -r /tmp/resolv.conf.auto --stop-dns-rebind --rebind-localhost-ok --dhcp-range=lan,192.168.2.2,192.168.2.249,255.255.255.0,12h -2 eth0

    In looking at this site: http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html I see that the -T option has this description:

        -T, --local-ttl=<time>
        When replying with information from /etc/hosts or the DHCP leases file dnsmasq by default sets the time-to-live field to zero, meaning that the requester should not itself cache the information. This is the correct thing to do in almost all situations. This option allows a time-to-live (in seconds) to be given for these replies. This will reduce the load on the server at the expense of clients using stale data under some circumstances.

    My command doesn't have the -T option. Do I need it or does dnsmasq default TTL to zero without it?
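
    Reading that man page excerpt, my understanding is that the answer is "no": with no -T/--local-ttl on the command line, hosts-file answers already go out with a TTL of 0. If I ever did want to allow a small amount of client caching, it looks like it would just be a matter of appending the option to the same command, for example:

        --local-ttl=10     # let clients cache hosts-file answers for 10 seconds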

    Read the article

  • Apache displays error page half way through PHP page execution

    - by Shep
    I've just installed Zend Server Community Edition on a Windows Server 2003 box, however there's a bit of a problem with the display of a lot of our PHP pages. The code was previously running under the same version of PHP (5.3) on IIS without any issues. By the looks of things, Apache (installed as part of Zend Server) is erroring out during the rendering of the page when it comes across something it doesn't like in the PHP. Going through the code, I've been able to get past some of the problems by removing the error suppression operator (@) from function calls and by changing the format of some includes. However, I can't do this for the whole site! Weirdly, the error code is reported as "200 OK". The code snippet below shows how the Apache error HTML interrupts the regular HTML of the page:

        <p>Ma quande lingues coalesce, li grammatica del resultant lingue es plu simplic e regulari quam ti del coalescent lingues.</<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>200 OK</title>
        </head><body>
        <h1>OK</h1>
        <p>The server encountered an internal error or misconfiguration and was unable to complete your request.</p>
        <p>Please contact the server administrator, [email protected] and inform them of the time the error occurred, and anything you might have done that may have caused the error.</p>
        <p>More information about this error may be available in the server error log

    The Apache error log doesn't offer any explanation for this, and I've exhausted my Googling skills, so any help would be greatly appreciated. Thanks.

    Read the article

  • Choice of filesystem for GNU/Linux on an SD card

    - by gspr
    Hi. I have an embedded ARM-based system running from an SD card. It's currently Debian GNU/Linux using ext3 as the filesystem. As I'm about to reinstall the system, I started wondering about changing to a more flash-friendly filesystem. I've heard about JFFS2, YAFFS2 and LogFS, and they all seem suited to the job. Which one would you recommend? Also, I've heard there have been a lot of ext4 improvements to better suit SSD disks; am I to interpret that as meaning that running ext4 should be just fine? What do I need to think about especially in that case?

    I guess the usage of the system is important. But for the sake of generality, imagine it'll do standard desktop stuff (even though it is in fact a small ARM-based system). Thanks for any replies.

    Edit: Wikipedia tells me (in a "citation needed" statement) that

        Removable flash memory cards and USB flash drives have built-in controllers to perform wear leveling and error correction so use of a specific flash file system does not add any benefit.

    Thus, I'm leaning towards sticking with an ext filesystem.
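
    If I do end up staying with ext4, this is the sort of setup I have in mind (the device name is an example, and I'm not sure how much difference the options actually make on an SD card):

        mkfs.ext4 -L rootfs /dev/mmcblk0p2
        # in /etc/fstab, mount with noatime to avoid a metadata write on every read:
        # /dev/mmcblk0p2  /  ext4  noatime,errors=remount-ro  0  1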

    Read the article

  • shared hosting with malware, .htaccess file gets modified every 2 hours or so

    - by apache
    I spent all day today chasing malware on the shared hosting for one of my clients. The issue is as follows: every 2 hours or so the .htaccess file and all other .htaccess files get modified. At the top of each file these lines are added:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^.*(google|ask|yahoo|youtube|wikipedia|excite|altavista|msn|aol|goto|infoseek|lycos|search|bing|dogpile|facebook|twitter|live|myspace|linkedin|flickr)\.(.*)
        RewriteRule ^(.*)$ http://pasla-ghwoo.ru/rqpgfap?8 [R=301,L]
        </IfModule>

    and at the bottom:

        ErrorDocument 400 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 401 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 403 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 404 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 500 http://pasla-ghwoo.ru/rqpgfap?8

    The main problem is that I'm not root on the server and cannot sudo, as this is shared hosting with hundreds of websites. Typical good commands like dmesg, lsof, dtrace, chattr and many others are not available to me as I'm not root. I can't find out who is modifying the .htaccess files - how do I get that info? My guess is some PHP script is changing them, called from outside via command and control. This seems to relate to this: http://blog.unmaskparasites.com/2009/09/11/dynamic-dns-and-botnet-of-zombie-web-servers/ How do I find out who is modifying the .htaccess files without being root?
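
    As a non-root user I have at least been able to search my own docroot for the likely injector, along these lines (the paths are examples for this host):

        # PHP files modified within the last two hours (roughly the rewrite interval)
        find ~/public_html -name '*.php' -mmin -120
        # PHP files containing the usual obfuscation primitives
        grep -rl --include='*.php' -e 'base64_decode' -e 'eval(' ~/public_html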

    Read the article

  • Php5-fpm crashes when there are many visitors

    - by chillah
    I decided to switch from LiteSpeed to Nginx because I had read a lot about how few resources Nginx needs. I'm running a WordPress site with 500 users online. My www.conf in /etc/php5/fpm/pool.d:

        pm.max_children = 200
        pm.start_servers = 20
        pm.min_spare_servers = 20
        pm.max_spare_servers = 60
        ;pm.process_idle_timeout = 10s;
        ;pm.max_requests = 1000

    Since I changed that config to allow more requests and more children, the site takes longer until it shows me a blank white page. If I then restart the php5-fpm process, the site runs fine again. CPU usage is at 5% when php5-fpm has crashed, and when it's running it's about 20 to 90%! Here's my nginx.conf:

        user www-data;
        worker_processes 2;
        pid /var/run/nginx.pid;

        events {
            worker_connections 3048;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;

    I already tried changing the settings, tried all the variants, but nothing worked. The machine: dual core, 4 GB RAM.
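
    One thing I have been trying to work out is whether 200 children can even fit in 4 GB of RAM - a rough back-of-the-envelope check like this (assuming the process name is php5-fpm):

        # average resident memory per php5-fpm child, in MB
        ps -C php5-fpm -o rss= | awk '{sum+=$1; n++} END {if (n) print sum/n/1024}'
        # compare against what the box actually has left for PHP
        free -m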

    Read the article

  • Can't access site internally, but DNS works

    - by BloodyIron
    1) I have apache2 running a vhost for a website.
    2) This apache2 instance is already successfully set up for other websites on it to be accessible internally and externally.
    3) I am using an internal bind9 server to resolve the new website's domain internally to the private IP. This bind9 server is not public facing, nor is it the master server on the internet.
    4) The DNS internally resolves to the right IP.
    5) Firefox reports "server not found".
    6) I have copied the config almost identically from other configs that are known to work (adjusting for proper paths of course). In turn I have reloaded and restarted apache2 repeatedly.
    7) I have an entry to forward the .org, .info and .net alternative TLDs to .com in the vhost config for this domain, and my browser goes from .org to .com despite note #5.
    8) /var/log/apache2/access.log shows when someone externally tries to access the site, but no activity is observed when someone tries to access it internally. Changing the log level does not appear to improve the situation.
    9) I am out of ideas; nothing appears to be wrong. Please help?

    To be explicit: why is this new site unreachable internally? I would like to clarify something, even though I have already outlined this. YES, I know this system is in a private network. NO, it is not going through a router. YES, I am using an internal DNS server (bind9) to resolve, and YES, it does resolve to the proper internal IP. YES, other websites on the same server set up in the same way with internal resolution work right now and have done for a while. Everything for this domain is set up the same as the other working domains as far as I can tell. The other working domains are internally AND externally accessible. This domain I am working with is only currently externally accessible. When I go to it internally, Firefox tells me "Server not found".
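
    For completeness, these are the kinds of checks I can run from an internal client (the domain and IPs below are placeholders for the real ones):

        dig newsite.example.com @192.168.0.5 +short                    # point 4: does the internal bind9 answer?
        curl -v -H 'Host: newsite.example.com' http://192.168.0.10/   # bypass DNS entirely and hit the vhost
        netstat -tlnp | grep ':80'                                     # on the server: is apache listening where expected?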

    Read the article

  • 'The rpc server is unavailable' or 'access is denied' error when using Remote desktop Services Manager on Windows 7 (but mstsc.exe works!)

    - by tbone
    I am trying to connect to a Windows XP workstation from a Windows 7 Ultimate workstation using Remote Desktop Services Manager. I am able to do a Remote Desktop (mstsc.exe) session from the Win7 machine to the WinXP machine with no problem at all. When running the Remote Desktops Admin (tsmmc.msc) too on a Windows XP box, I can also connect with no problem. However, when I use the new Remote Desktop Services Manager on Windows 7 and try to connect, I get the error: "The rpc server is unavailable" What could cause this? Has there been some fundamental change in Remote Desktop Services Manager, does it connect in a different way somehow? Update #1 Turned off firewall on the Windows XP box and the "The rpc server is unavailable" error went away; so RDSM seems to be using an entirely new port/connection/service compared to mstsc.exe or the old Remote Desktops Admin tool. Now... after disabling the firewall, I get a new error: Access is Denied. After doing some googling, I found some articles discussing this; basically, the error is very misleading - the actual problem is, if either side of the connection has dual monitors, and they are not both Win7 Ultimate, then you cannot connect using Remote Desktop Services Manager...the reason is, by default it uses the /multimon switch, and this switch requires a certain level of Windows license - and, there seems to be no way of changing this default (if anyone knows of a way to change this default, please post an answer or comment!). Nice going Microsoft. http://social.technet.microsoft.com/Forums/en-US/windowsserver2008r2rds/thread/4d06278f-e0f4-4f8e-a8e1-3697ee967ef4 http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Windows/Windows_7/Q_26225743.html

    Read the article

  • SQL Server 2008 Replication: Snap Shot Agent timing out on a specific table

    - by Nai
    Hi all, I am going through the process of setting up transactional replication on my setup. However, I am unable to generate a full snapshot of my database. I keep getting stuck at the same particular point of the process. From the Snapshot Agent:

        [46%] The process is running and is waiting for a response from the server

    Other sources have mentioned increasing the timeout period, but I do not think this is the problem, as this occurs at around the 10 minute mark, well within the default value of 1800s. So, I removed that article/table from the snapshot, and lo and behold, it works. So I was looking at the schema of the table, and in my identity column NOT FOR REPLICATION is set to TRUE. All my other tables have it set to FALSE. Could this be the source of the error? Also, I tried changing this to FALSE via the GUI but received a timeout error. Is there a script that allows me to change this? Thanks all.
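
    For what it's worth, the kind of script I was hoping exists would look something like this (instance, database and table names are placeholders, and I haven't confirmed that sp_identitycolumnforreplication is the right procedure - treat the second line as a guess):

        sqlcmd -S .\MYINSTANCE -d MyDatabase -Q "SELECT OBJECT_NAME(object_id) AS tbl, is_not_for_replication FROM sys.identity_columns"
        sqlcmd -S .\MYINSTANCE -d MyDatabase -Q "EXEC sys.sp_identitycolumnforreplication @object_id = OBJECT_ID('dbo.MyTable'), @value = 0"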

    Read the article

  • Scenario - NTFS Symbolic Link or Junction?

    - by Unsigned
    Differences:

                        Absolute   Relative   File   Directory   UNC
        Symbolic link      ✓          ✓        ✓        ✓         ✓
        Junction           ✓          x        x        ✓         x

    Scenario: let's assume we're creating a reparse point to create the redirect C:\SomeDir => D:\SomeDir. Since this scenario only requires local, absolute paths, either a junction or a symlink would work. In this situation, is there any advantage to using one or the other? Assume Windows 7 for the OS, disregarding backward compatibility (prior to Vista, symlinks are not supported).

    Update: I have found another difference.

    - Symbolic link: the link's permissions only affect delete/rename operations on the link itself; read/write access (to the target) is governed by the target's permissions.
    - Junction: the junction's permissions affect enumeration; revoking permissions on the junction will deny file listing through that junction, even if the target folder has more permissive ACLs.

    The permissions make it interesting, as symlinks can allow legacy applications to access configuration files in UAC-restricted areas (such as %ProgramFiles%) without changing existing access permissions, by storing the files in a non-restricted location and creating symlinks in the restricted directory.
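
    For reference, both variants of the scenario above can be created with mklink from an elevated prompt (one or the other, and C:\SomeDir must not already exist):

        REM directory symbolic link:
        mklink /D C:\SomeDir D:\SomeDir
        REM junction:
        mklink /J C:\SomeDir D:\SomeDir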

    Read the article

  • 403 Forbidden serving static files from VirtualBox shared folder with nginx (Ubuntu 10.04LTS guest, Windows 7 host)

    - by Chris Pratt
    I'm working on a local development VM and trying to test serving my site with gunicorn and nginx as a reverse proxy for static resources only. The site loads minus static resources with user nginx; in nginx.conf. Attempting to load a static resource individually reveals a 403 Forbidden error. For background. The static resources are in a shared folder under /media/sf_work. All files are owned by root:vboxsf (VirtualBox default). My user account on the system has been added to the vboxsf group, and I have full access to the shared folder. For comparison, I tried changing the nginx.conf user to my user account. In that scenario, the static files did load, but then the homepage itself gives a 403 Forbidden error. So, I then tried adding the nginx user to the vboxsf group, but then everything gives a 403 Forbidden error. After further investigation it seems that if the nginx.conf user is in any group, it results in a 403 Forbidden. Any idea what could possibly be going on here?

    Read the article

  • How to have LiveJournal delegate my OpenID to something else?

    - by T-Boy
    I understand that if I have full control over my domain, I can set it up to delegate the task of authenticating to another OpenID service provider. The problem is, what I'd like to do is to get the LiveJournal server to pass the authentication to someone else, instead of having LJ do it. Preferably what I'd like is to get LiveJournal, when asked by a web site, to say, "No, I don't do it anymore -- go to this address". The plan was that this address would then be in a domain I fully control, which would then pass it on to whichever service provider I choose. I don't even know if I've gotten my understanding of OpenID right, if all these shenanigans are necessary, if my question makes sense, or if it's even possible with a service provider like LiveJournal.

    ETA: Doing a little more reading up, and examining the source for my LiveJournal user page, I note this particular line in the file's <head> area:

        <link rel="openid.server" href="http://www.livejournal.com/openid/server.bml" />

    I suspect that changing this will allow me to forward OpenID requests to whomever I wish, I think; so far so good. Now comes the hard part -- figuring out how to change all of that using LiveJournal's customization options, if that is at all possible (here's hoping I don't need to pay to get that functionality).

    Read the article

  • SSH Connection Error : No route to host

    - by dewbot
    There are three machines in this scenario: Desktop A : [email protected] Laptop A : [email protected] Machine B : [email protected] All the machines have Ubuntu 11.04 (Desktop A is a 64bit one) and have both openssh-server and openssh-client. Now when I try to connect Desktop A to Laptop A or vice-versa by ssh [email protected] I get an error as port 22: No route to host in both the cases. I own both the machines, now if I try same commands from my friend's machine, i.e. via Desktop B, I can access both my Laptop and Desktop. But if I try to access Desktop B from my Laptop or by Desktop I get port 22: Connection timed out I even tried changing ssh port no. in ssh_config file but no success. Note: that 'Laptop A' uses WiFi connection while 'Machine A' uses Ethernet Connection and 'Machine B' is on an entirely different network. Laptop A && Desktop A - Router/Nano_Rcvr provided to me by ISP. So to one Router two Machines are connected and can be accessed at the same time. here is my ifconfig output for both the machines :- Laptop wlan0 Link encap:Ethernet HWaddr X:X:X:X:00:bc inet addr:1.23.73.111 Bcast:1.23.95.255 Mask:255.255.224.0 inet6 addr: fe80::219:e3ff:fe04:bc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:108409 errors:0 dropped:0 overruns:0 frame:0 TX packets:82523 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:44974080 (44.9 MB) TX bytes:22973031 (22.9 MB) Desktop eth0 Link encap:Ethernet HWaddr X:X:X:X:c5:78 inet addr:1.23.68.209 Bcast:1.23.95.255 Mask:255.255.224.0 inet6 addr: fe80::227:eff:fe04:c578/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:10380 errors:0 dropped:0 overruns:0 frame:0 TX packets:4509 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1790366 (1.7 MB) TX bytes:852877 (852.8 KB) Interrupt:43 Base address:0x2000

    Read the article

  • Multiple Rails apps on same subdomain?

    - by Derek
    I recently decided to try out Rails. When working with PHP, I simply had all of my PHP projects in the same directory. For example, I may have http://ubuntu/app1, http://ubuntu/app2, etc. I created a subdomain for Rails (http://ruby.ubuntu), installed Rails and Passenger and everything is working. However, I may be wrong, but it looks like I can only have one Rails app per subdomain? My VirtualHost is as follows:

        <VirtualHost *:80>
            ServerName ruby.ubuntu
            ServerAdmin webmaster@localhost

            DocumentRoot /var/www/ruby/blog/public
            <Directory /var/www/ruby/blog/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
                RailsEnv development
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    All of my PHP and misc. files are stored in /var/www/main. I want to be able to store all of my Rails apps in /var/www/ruby. I tried changing DocumentRoot to /var/www/ruby, but I don't think it's as simple as that. When I browse to a Rails app's Welcome Aboard page and click on "About my application's environment," I get a 404 page, but when the DocumentRoot is set to the public directory, I get the expected result. I don't want to have to create a new subdomain every time I create a new project. Is there any way I can make it so I can store all of my apps in /var/www/ruby, and browsing to http://ruby.ubuntu will let me access all of my Rails apps there? That way if I want to create a new app, all I have to do is rails new app, no Apache .htaccess or VirtualHost configuration required.
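
    From what I have read so far, Passenger's sub-URI feature might be the closest thing to what I want - something along these lines (directory names are examples, and I haven't verified the exact directive name for my Passenger version):

        # keep DocumentRoot /var/www/ruby and symlink each app's public dir into it
        sudo ln -s /var/www/ruby/apps/blog/public /var/www/ruby/blog
        # then, inside the VirtualHost:
        #   RailsBaseURI /blog        (RackBaseURI for Rails 3 / Rack apps)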

    Read the article

  • Can't access random web pages on my MacBook Pro 2012?

    - by Faruk Sahin
    Sometimes I can't access random web pages. The page simply doesn't load. If I wait for around a minute doing nothing, it will load. It happens very randomly and intermittently. Sometimes it starts when I try to access youtube.com or cnn.com. When it starts, it happens once a minute or once every 5 minutes for random web pages. But if I am downloading something, the download continues without any interruption. And I am also able to ping the address I can't browse. Then if I wait for around a minute, everything starts to work fine on the browser side as well. I have tried a lot of different browsers. I have tried changing my DNS servers to Google's public DNS servers. Using a cable instead of the wireless connection doesn't work either. No one else on the network has this problem but me. What can be the problem?

    Read the article

  • Serious 64-bit laptop

    - by Daniel Gehriger
    For the past couple of years, I have been using an IBM ThinkPad T60p for daily work (software development, desktop & embedded). I am extremely satisfied with this machine due to its robustness. It also has a few features I depend on: a high resolution display (15.0" TFT FlexView display with 1600x1200, UXGA), an excellent keyboard, and decent graphics and CPU performance. Some of the software I develop benefits from larger amounts of RAM, and 3GB (Windows 7 32-bit) or 4GB (Windows 7 64-bit on the T60p) is no longer sufficient. My customers run desktop computers with 20GB and more, and I need at least 8GB to be able to run reasonable test cases. So I'm shopping around for a new laptop, but I'm struggling to find anything that matches my requirements:

    - must run Windows 7 64-bit Pro or higher;
    - must support at least 8GB of RAM (more is better);
    - high screen resolution! While I prefer 4:3, I can live with wide screen. But I really hope to find something with a vertical screen resolution similar to what I have now;
    - portable, so < 16" but >= 14";
    - I realize that FlexView isn't available anymore, but I'd like to avoid a glossy screen if possible;
    - decent (not more) graphics performance, ideally hybrid (I'm doing a lot of CAD, never games);
    - good keyboard;
    - reasonable CPU -- but I'm still fine with my current Core 2 Duo, so that shouldn't be too complicated.

    The T60p fits all those requirements except the 8GB of RAM. Can you help me find a current notebook that would match most of them? I don't mind changing brand. Thanks!

    Read the article

  • hosting company blocking google bots and crawlers [closed]

    - by Jayapal Chandran
    Hi, I have had a site for the past three years and it has been very active for the past two. The site was working well until the hosting company started blocking Google bots. Many of my pages appeared on the first page of Google search results; after they started blocking, I couldn't see my links on the first page - instead they appeared after 5 pages, or did not appear at all. Can hosting companies really be so careless that they block crawlers and don't even mention it to their users? They want to protect themselves, but they put their customers' websites at stake. I display Google ads, and this month I got only half the usual revenue for the first 10 days. I have made requests to other hosting companies, like Bluehost and Monster Host, saying that I want to transfer my domain on the condition that they will not block Google bots, which indirectly stops my business. So any kind of help would be appreciated: how can I claim what I lost from the hosting company, and which other hosting companies are considerate of their users (by informing them of events like changing the IP or blocking Google bots)? I worked really hard to bring up my site, but these people crashed it in a few days. :-(
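
    One way I have been trying to confirm the block from outside is to request a page with a Googlebot user-agent string and compare it against a normal request (the domain below is a placeholder for mine):

        curl -I http://www.example.com/
        curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://www.example.com/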

    Read the article
