Search Results

Search found 11753 results on 471 pages for 'technical links'.

  • MS Word TOC that references # pages rather than page number

    - by buttonsrtoys
    We frequently need to write specifications in Word which require a TOC that refers to the total number of pages in a section, rather than the page number. E.g.:

        Section  No. Pages
        01010 Summary of Work..............5
        01025 Prices.......................2
        01400 Quality Control..............1
        01700 Contract Close Out...........2

    A wrinkle is that each section is a separate file. To date, we've been writing our TOC by hand, which has introduced every error imaginable. Is there an MS Word feature that populates a TOC with page totals? If not, I've done a little VB in Office, so I wouldn't be opposed to that route as need be, as long as it was usable by our low-tech users. Related question - all the section files are in the same folder. It would be nice if the TOC loaded every file in the folder, rather than having to specify each one. Is this a feature of Word, or would this require VB? We tried a master document with links to subdocuments, but since the number of section files ebbs and flows with each project, that approach required too much maintenance for our Wordophobes.
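
    Since the question leaves the VB route open, here is a minimal VBA sketch, assuming all section files sit in one folder; the folder path is a placeholder, and the macro simply types "filename <tab> pages" into whichever document is active:

        Sub BuildSectionTOC()
            ' Walk every Word file in one folder, count its pages,
            ' and type a TOC line into the currently active document.
            Dim fld As String, f As String
            Dim doc As Document, pages As Long
            fld = "C:\Specs\Sections\"            ' hypothetical folder
            f = Dir(fld & "*.doc*")
            Do While f <> ""
                Set doc = Documents.Open(fld & f, ReadOnly:=True, Visible:=False)
                pages = doc.ComputeStatistics(wdStatisticPages)
                Selection.TypeText f & vbTab & pages & vbCr
                doc.Close SaveChanges:=False
                f = Dir
            Loop
        End Sub

    Because it reads whatever *.doc* files are in the folder at run time, the ebb and flow of section files takes care of itself.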

  • How can I use mod_rewrite to redirect multiple specific URLs containing multiple query strings?

    - by Derek
    Hi there folks, we recently migrated a site from a custom CMS to Drupal. In an effort to preserve some links that our users bookmarked (we have about 120 redirects), we would like to forward the original URLs to new URLs. I have been searching for a couple of days, but can't seem to find anything similar to what I need. We have existing URLs that contain one or more query strings, for example:

        /article.php?issue_id=12&article_id=275

    and we would like to forward to the new location:

        http://foobar.edu/content/super-happy-fun-article

    I started using:

        RewriteEngine On
        RewriteRule ^/article\.php?issue_id=12&article_id=275$ http://foobar.edu/content/super-happy-fun-article [R=301,L]

    This, however, does not work. A simple rule works:

        RewriteRule ^test\.php$ index.php

    It is unclear to me how I need to use %{QUERY_STRING} with multiple parameters. Basically it's 120 simple redirects that go from one existing URL to a new one. I don't need ranges [0-9], because there is no sequential order to the existing URLs. Perhaps I can do what I need with RewriteMap and a simple text file that contains lines like this:

        index.php?issue_id=12&articleType_section=0&articleType_id=65 http://foobar.edu/category/fall-2008

    If anyone has any idea on using mod_rewrite to accomplish this, or if there is a better or simpler mod, I am open to that as well. Thanks!
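
    For what it's worth, a hedged sketch of the %{QUERY_STRING} approach: RewriteRule never sees the query string, so each redirect needs its own RewriteCond, and the trailing "?" on the target drops the old query string. The hostnames and paths are the ones from the example:

        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^issue_id=12&article_id=275$
        RewriteRule ^/article\.php$ http://foobar.edu/content/super-happy-fun-article? [R=301,L]

    (In a per-directory .htaccess context the pattern would be ^article\.php$ without the leading slash.) With 120 of these, the RewriteMap text-file idea is the cleaner long-term shape.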

  • Desktop applications are unable to launch my browser in Windows 8 [migrated]

    - by Alex Ford
    I have a fresh copy of Windows 8 Pro installed from MSDN. I have Google Chrome installed (stable channel) and it is set as my default browser. I even went into Control Panel > Default Programs to ensure that Chrome had all its defaults. When other desktop applications try to launch my browser, they always fail. For example, while trying to install the Android SDK for Windows, the installer accurately detected that I did not have the JDK installed. It provides a friendly button to visit java.oracle.com. When pressing this button, nothing happens at all. You can see that here: http://youtu.be/XXL8GhuWWg0 If it were only that application having issues I wouldn't think anything of it, but I have been encountering similar issues all over the place. Probably the most irritating one is when Visual Studio has updates; clicking the update button does nothing. http://www.youtube.com/watch?v=zwd1mn3TId0 You can see in that screencast that Visual Studio is not able to launch the browser no matter what I click. The update button doesn't do anything, and neither do the two links in the update's description. Any suggestions? I'm assuming it's a Windows issue, since it is happening in multiple applications.
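
    One hedged diagnostic: desktop apps launch URLs through the http protocol handler rather than the browser executable, so it is worth checking what that handler actually points at. From a command prompt:

        reg query "HKCR\http\shell\open\command" /ve

    If the default value shown is not a valid chrome.exe command line, re-registering Chrome as the default (or reinstalling it) would be the next thing to try.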

  • How do I configure Jetty (via jettyrunner) so that it names a character set in the Content-Type response header?

    - by Pointy
    I use Jetty (via the oh-so-handy Jetty Runner) for day-to-day web application testing. One thing I've recently stumbled on is the fact that I don't always get a character set called out in the "Content-Type" response header. I do get it in responses to my application's XMLHttpRequest transactions, but not for plain old pages loaded via <a> links or whatever. I've read a little bit about how to set up a Jetty config file, but I've never been able to completely understand that; all servlet containers are complicated, and while Jetty is pretty simple it's just weird enough that I don't grok the overall idea. Thus, all I do to launch my app is run the Jetty Runner .jar file with a couple of simple arguments to set the port number and logfile path, and then I just give it the .war file to run. It works great — except for the missing character set :-) Anybody have a quick sample config file that might fix this? Edit: oh, if it matters, I'm running Jetty 7.0.0 RC3; I've also tried a slightly newer version (still 7.something) with exactly the same issue. All my testing is on Ubuntu.
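
    Not a config-file answer, but a hedged workaround sketch: a tiny servlet filter that forces a charset on every response, mapped on /* in web.xml so it covers plain page loads as well as the XHR ones (the class name and the charset choice are placeholders):

        // CharsetFilter.java - set a charset before anything writes to the response
        import java.io.IOException;
        import javax.servlet.*;

        public class CharsetFilter implements Filter {
            public void init(FilterConfig config) {}
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                res.setCharacterEncoding("UTF-8");   // shows up as ;charset=UTF-8 on text responses
                chain.doFilter(req, res);
            }
            public void destroy() {}
        }

    Wired up with an ordinary <filter>/<filter-mapping> pair in the .war's web.xml, it sidesteps the Jetty config question entirely.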

  • Can you help me understand my SATA/RAID options?

    - by andrz_001
    I've got a Gigabyte GA-M720-US3 motherboard. Recently, I noticed the following during boot:

        IDE channel 0 Master (none)
        IDE channel 0 Slave  (none)
        IDE channel 2 Master (my hdd)
        IDE channel 2 Slave  (my dvd drive)
        IDE channel 3 Master (none)
        IDE channel 3 Slave  (none)

    Of course, the same information is shown in the BIOS/CMOS setup. The HDD is connected to the mobo via a SATA(2?) cable at the port(?) labeled SATA2_0. The DVD drive is connected by a similar cable at SATA2_1. Why doesn't the information displayed during boot and in the BIOS reflect how I plugged the cables in? I mean, why "none" for channel 0 when there is something in SATA2_0 (or is that serious naivete on my part!?)? Where's channel 1 master and slave? Since these are SATA cables and not the IDE ribbons from a while back, why the whole master/slave declaration during boot and in the BIOS? Should my BIOS reflect the fact that these are SATA cables? I mean, in the BIOS, should the "OnChip SATA mode" be set to RAID or AHCI instead of IDE? Any replies, answers, suggestions, links, tips will be much appreciated. Thank you in advance!

  • Have a set of CGI scripts shared by multiple domains

    - by rpat
    Goal: Have multiple domains share a set of CGI (Perl) scripts. Environment: Apache 2.0 on a dedicated CentOS server (Apache configuration files generated by cPanel). I have dozens of domains on the dedicated server. The domains are set up by cPanel under the VirtualHost section. I have almost no knowledge of Apache; most of what I do is taken care of by cPanel. I would like to put a set of scripts under one directory (perhaps under / or /opt) and, for each of the domains, create a symbolic link in the individual cgi-bin to this common directory. This way I am hoping to avoid having to keep a copy of the scripts for every domain. Since the Apache config files are generated by cPanel, I would not like to manually make changes to those. Besides, I could mess things up. I see that cPanel recommends the use of include files rather than changing httpd.conf. Perhaps I need to have the following of symbolic links enabled in the cgi-bin directory, and to allow the web server user to execute scripts not owned by it. Maybe I am making things more complicated than they are. I would be glad to use any other means to achieve my goal. Thanks in advance for your help. *I asked this on Stack Overflow and someone suggested that I could ask it on Server Fault.
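
    A hedged sketch of what such an include file might contain; the /opt path and the /cgi-bin/shared/ prefix are placeholders, and the Order/Allow syntax matches Apache 2.0:

        # One real directory serving the shared Perl scripts for every domain
        ScriptAlias /cgi-bin/shared/ /opt/shared-cgi/
        <Directory /opt/shared-cgi>
            Options +ExecCGI +FollowSymLinks
            Order allow,deny
            Allow from all
        </Directory>

    If an include like this is loaded globally, the ScriptAlias makes the per-domain symbolic links unnecessary, which sidesteps the FollowSymLinks and ownership questions entirely. A suEXEC setup may still object to scripts not owned by the domain user, though.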

  • How to make Firefox file associations consistent with Ubuntu file associations?

    - by wbharding
    This seems to be a pretty commonly Googled question, but one for which there are no answers. http://www.linuxquestions.org/questions/linux-software-2/firefox-download-mime-types-378902 and http://www.birkit.com/content/kubuntu-linux/internet/firefox/fix-file-associations-in-firefox.html are a couple of links amongst the many. The gist of what I want to accomplish is to have Firefox understand the file associations of what I download without me having to manually map all of them myself. Gnome knows the file extensions, so I would have expected that Firefox could just use the already-known file mappings there to open the right stuff (as I presume Chrome does). But it doesn't. At least not for me, using Firefox 4, and not by default. When I click on a downloaded file right now, Firefox always has to ask me what application should be used to open the file. A handful of Google results tell me that I can reassociate my file extensions by deleting ~/.mozilla/firefox/[profile name]/mimeTypes.rdf, but while deleting that file does in fact result in a new mimeTypes file being generated, the new mimeTypes is just as barren as the old one had been. Based on the number of unanswered questions around the web, I know this is a very common problem for Ubuntu users, but it seems to be one for which nobody has chimed in with a good solution. Maybe Superuser can finally be the panacea for us all?

  • Which scripting language to use to asynchronously ssh into equipment, run several commands, parse the output, and save to a file on my computer?

    - by Fujin
    There are several points I'd like to stress in my question. I'd like to log in by asynchronously ssh'ing into our infrastructure equipment. Meaning, I do not want to connect to only one device, do all the tasks I need, disconnect, then connect to the next device. I want to connect to several devices at once in order to make the process as fast as possible. By equipment I mean 'infrastructure equipment' and not servers. I say this because I will not have the luxury of saving files to the device and then transferring them to myself with scp or another method. The output of the commands that are run will have to be saved directly to my computer, and it will need to be cleaned up and parsed. Also, I want the outputs from all the devices to be combined into one nice and neat file, not a separate file for each device. This will all be done from a Linux box, using ssh, into devices that all run Linux-ish proprietary OSes. My guess is the answer to my question will be either a Bash, Perl, or Python script, but I figured it wouldn't hurt to ask and to hear the reasons why one way is better than another. Thanks everyone. EXTRA CREDIT: With your answer, include links to resources that will help create the script I described in the language that you suggested.
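
    In the spirit of the extra credit, a minimal Bash sketch of the fan-out pattern; the host names and the command string are placeholders, and it assumes the devices accept a command line over ssh:

        #!/bin/bash
        # Connect to every device at once, then merge the outputs into one file.
        hosts="router1 router2 switch1"
        cmds="show version; show interfaces"
        for h in $hosts; do
            ssh -o ConnectTimeout=5 "$h" "$cmds" > "/tmp/out.$h" 2>&1 &   # note the &: async
        done
        wait                                            # block until every session finishes
        cat /tmp/out.* | awk 'NF' > all-devices.txt     # drop blank lines, combine into one file
        rm -f /tmp/out.*

    Bash gets the collection done; once the parsing requirements grow, Perl (Net::OpenSSH) or Python (paramiko plus threads) tend to age better, which is the usual argument for them.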

  • Relinking a deleted file

    - by mbac32768
    Sometimes people delete files they shouldn't, a long-running process still has the file open, and recovering the data by catting /proc/<pid>/fd/N just isn't awesome enough. Awesome enough would be if you could "undo" the delete by running some magic option to ln that would let you re-link to the inode number (recovered through lsof). I can't find any Linux tools to do this, at least with cursory Googling. What do you got, serverfault? EDIT1: The reason catting the file from /proc/<pid>/fd/N isn't awesome enough is because the process which still has the file open is still writing to it. A delete removes the reference to the inode from the filesystem namespace. What I want is a way of re-creating the reference. EDIT2: 'debugfs ln' works, but the risk is too high since it frobs raw filesystem data. The recovered file is also crazy inconsistent: the link count is zero and I can't add links to it. I'm worse off this way, since I can just use /proc/<pid>/fd/N to access the data without corrupting my fs.
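
    For the record, a hedged sketch of the debugfs route being dismissed in EDIT2 (ext2/3/4 only; the device and inode number are placeholders, and running this against a mounted filesystem is exactly the risk described):

        lsof -nP | grep '(deleted)'                  # locate the pid/fd and the inode number
        debugfs -w /dev/sda1 -R 'ln <123456> /recovered.file'
        debugfs -w /dev/sda1 -R 'mi <123456>'        # interactively raise the link count from 0
        # ...or unmount and let e2fsck repair the link count afterwards

    Given those caveats, tailing /proc/<pid>/fd/N until the writer finally closes the file usually really is the saner recovery path.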

  • Nginx Redirect when URL includes variable p=1

    - by ChrisD
    Need to write a small nginx rewrite rule (or rules) to alter/301 redirect some URLs within our existing website. For example:

        www.example.com.au/pageone.html?p=1
          -> www.example.com.au/pageone.html
        www.example.com.au/pagetwo.html?dir=asc&limit=200&order=price&p=1
          -> www.example.com.au/pagetwo.html?dir=asc&limit=200&order=price
        www.example.com.au/pagethree.html?dir=dsc&limit=100&order=price&p=1
          -> www.example.com.au/pagethree.html?dir=dsc&limit=100&order=price

    As you can see, p=1 has been stripped from the URLs (it is superfluous, but has been live on the site and needs to be redirected now) - all http and https links. Basically if, and only if, p=1 is used anywhere within the URL, then it should redirect to the same URL without the p=1. This should also let p=11, p=12, etc. through as normal (and not redirect), as they are not specifically p=1. If that is not possible, then I'd like to know how to redirect this kind of URL as a standalone one-off:

        www.example.com.au/pageone.html?p=1 -> www.example.com.au/pageone.html

    I tried several redirects but could not get any of them working. To be honest I do not really know where to start with this - I am new to nginx.
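
    A hedged sketch that covers the example URLs, where p=1 always appears at the end of the query string; these would sit at server level, and the "?" at the end of each rewrite target stops nginx from re-appending the original arguments:

        # ?p=1 is the only argument: strip the whole query string
        if ($args = "p=1") {
            rewrite ^ $uri? permanent;
        }
        # ...&p=1 at the end: keep everything before it
        if ($args ~ "^(?<keep>.+)&p=1$") {
            rewrite ^ $uri?$keep? permanent;
        }

    Neither test matches p=11 or p=12, because both patterns require the string to end right after p=1.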

  • New to server admin, Diagnosing Memory and CPU issues on DV

    - by G Thompson
    Sorry for my ignorance and lack of knowledge. I'm a PHP/front-end developer just now venturing into very minor server management/diagnostics. I have a Media Temple DV account. I have 2 sites that run a PHP script through a subscription service to an API. Basically, the API hits the site with said script; the script runs, gathers data from the API, and saves the data to a SQL database. I noticed that these sites seemed to be causing memory overages on my server (not sure why), so I temporarily disabled them. The memory overage alerts stopped, but my CPU still sits really high, like at 115% and above. I'm trying to diagnose this with tutorials and resources but just can't seem to find a solution. I'll attach screenshots (taken without the PHP scripts I assume are responsible for the memory issues running) that I'm assuming are important to the diagnosis, but if anyone can point me in the right direction to start figuring out A. if and why the PHP script may be causing memory overages, and B. why my CPU is always over 100%, that would be great. Thanks guys! Links to screenshots (can't post them inline with low points): http://i.stack.imgur.com/A64k4.png http://i.stack.imgur.com/qm1rV.png
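
    A hedged first step for the CPU question, from a shell on the server (GNU ps syntax):

        ps aux --sort=-%cpu | head -n 10    # ten biggest CPU consumers right now
        ps aux --sort=-%mem | head -n 10    # ten biggest memory consumers

    Whatever sits at the top of the CPU list while the PHP scripts are disabled is the thing to chase next.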

  • File descriptor linked to socket or pipe in proc

    - by primero
    I have a question regarding file descriptors and their linkage in the proc file system. I've observed that if I list the file descriptors of a certain process from proc:

        ls -la /proc/1234/fd

    I get the following output:

        lr-x------ 1 root root 64 Sep 13 07:12 0 -> /dev/null
        l-wx------ 1 root root 64 Sep 13 07:12 1 -> /dev/null
        l-wx------ 1 root root 64 Sep 13 07:12 2 -> /dev/null
        lr-x------ 1 root root 64 Sep 13 07:12 3 -> pipe:[2744159739]
        l-wx------ 1 root root 64 Sep 13 07:12 4 -> pipe:[2744159739]
        lrwx------ 1 root root 64 Sep 13 07:12 5 -> socket:[2744160313]
        lrwx------ 1 root root 64 Sep 13 07:12 6 -> /var/lib/log/some.log

    I get the meaning of a file descriptor, and from my example I understand file descriptors 0, 1, 2 and 6: they are tied to physical resources on my computer, and I guess 5 is connected to some resource on the network (because of the socket). But what I don't understand is the meaning of the numbers in the brackets. Do they point to some property of the resource? Also, why are some of the links broken? And lastly, as long as I'm asking questions already :) what is a pipe?
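
    A hedged pointer for the bracketed numbers: they are inode numbers on the kernel's internal pipe and socket pseudo-filesystems, which is also why those links look "broken" - there is no path on disk for them to point to. The inode number is enough to find the other end of a pipe:

        lsof | grep 2744159739    # every process holding either end of this pipe

    (A pipe itself is just a one-way, in-kernel byte channel: here one fd writes into it, another reads from it.)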

  • ProxyPass issue in Apache

    - by user116992
    I am reverse proxying one site to another. It's working nicely, but on some links it exposes the original URL. Here is my configuration. Original site: "mysite.com". Second site configuration, which is proxypassed on "mysite.com":

        <VirtualHost *:80>
            ServerName test.mysite.com
            ServerAlias www.test.mysite.com test.mysite.com
            ErrorLog /var/log/apache2/test.mysite_log.log
            TransferLog /var/log/apache2/test.mysite-access_log.log
            LogLevel info
            LogFormat "%h %l %u %t \"%r\" %>s %b %T" common
            ProxyPass / http://mysite.com/
            ProxyPass / http://mysite.com/
            ProxyPassReverse / http://mysite.com/
        </VirtualHost>

    Now, everything is working well, but the problem is that when I go to some specific links they redirect me to the original site. For example, there are two sections on my page: "about-us" and "inquiry". When I click on "about-us" it takes me to "http://test.mysite.com/about-us", which is OK. When I click on "inquiry" it takes me to "http://mysite.com/inquiry", which is not correct; it must be "http://test.mysite.com/inquiry". I think I've missed something in the configuration file but I can't figure it out.
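
    A hedged guess at the missing piece: ProxyPassReverse only rewrites redirect headers (Location and friends), not URLs inside the HTML itself, so any absolute link the backend emits - like the inquiry one - passes through untouched. If the backend cannot be taught to emit relative links, mod_proxy_html (where available) can rewrite the HTML on the way through; a sketch:

        ProxyPass / http://mysite.com/
        ProxyPassReverse / http://mysite.com/
        ProxyHTMLEnable On
        ProxyHTMLURLMap http://mysite.com http://test.mysite.com

    (The duplicated ProxyPass line in the original config is redundant and can be dropped.)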

  • Apache, Permissions, and Convenience

    - by Mike
    I'm on Mac OS X and I have apache2 installed via MacPorts, running as the _www user. I have some files I want to serve in the /Users/Me/Documents/abc folder. Right now, though, the permissions on /Users/Me/Documents are 700, so _www can't get in, even if abc is chmod 777. I recognize the following options:

        1. Allow _www access to my Documents folder.
        2. Put the files I want to share outside of my Documents folder.
        3. Hard-link the files outside of my Documents folder, and point apache to the hard links.

    None of these solutions is acceptable to me, however. I don't feel safe allowing _www access to my entire Documents folder. I really want to keep the files in my Documents folder for other reasons. The files are changing all the time, so hard-linking would not always reflect the right file structure, and, as I understand it, you can't hard-link a directory (though, if you could, that would solve it). Any ideas for a solution? Is there a way to run a few httpd processes as my user account so they can get in there? Or is there some way to hard-link a directory, or some way to get httpd to follow a symlink past a directory that is 700 and not owned by _www? Thanks!
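
    One more option, hedged but standard Unix semantics: give Documents traverse permission without read permission. Execute ("x") on a directory only allows passing through it; without "r", _www cannot list what else is in Documents, but it can still reach abc by name:

        chmod o+x /Users/Me/Documents          # traverse only, no directory listing
        chmod -R o+rX /Users/Me/Documents/abc  # readable files, traversable subfolders

    That keeps the files where they are without exposing the rest of the folder.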

  • Efficient mirroring of directories using hardlinks

    - by zoqaeski
    I'm backing up my music collection onto a number of NTFS-formatted external hard drives; however, as I store my main collection in FLAC and have the library on my laptop as MP3s to save space, I want to be able to back up both sets, because mass conversion between formats is time-consuming. The "music" directory can contain any format; the "mp3s" directory contains only MP3s converted from files in the "music" directory. The music collection on the laptop contains only MP3s, but they come from both sources. When I back up my laptop's library to the "mp3s" directory, I want to copy across only the MP3 files that don't exist in the "music" directory; those that do should be hard-linked to the "music" directory. All directories have an identical hierarchy, sorted by artist, album, date, disc number if applicable, etc., and I use a tagging editor to ensure consistency across all these locations. I'm also using a Linux computer, but keeping the music collections on NTFS-formatted partitions so that they are readable by both Linux and Windows. At the moment, I use the following command to perform the backups, but it is time-consuming due to the expensive nature of finding hard links:

        rsync -avu --progress --relative --ignore-existing --link-dest=../music/ **/*.mp3 /media/ntfspocket/mp3s

    Is there a way to perform this backup more efficiently, taking advantage of the directory hierarchy?
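
    A hedged tweak worth timing: let rsync select the files itself instead of expanding the **/*.mp3 glob in the shell, so a single process walks the tree once:

        rsync -avu --progress --relative --ignore-existing --link-dest=../music/ \
            --include='*/' --include='*.mp3' --exclude='*' . /media/ntfspocket/mp3s

    The filter chain (descend into every directory, take MP3s, skip everything else) keeps the same semantics while avoiding a huge argument list.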

  • Only receiving one document at a time from new web server.

    - by Robert Kuykendall
    We're trying to move our internal ticketing system from a Microsoft Small Business Server in the server closet to a Rackspace Cloud Server. The install is Fedora 11 LAMP, and should be default out of the box, except for the vhosts appended to the bottom of httpd.conf. The new server is suffering from crippling load times, and watching a page load in Firebug it's easy to see the problem occurring, but I can't figure out the cause. Here is the old server: http://rkuykendall.com/uploads/old.server.png. I was expecting something like this, but a little slower, since it is no longer hosted locally. Instead, the new server (http://rkuykendall.com/uploads/new.server.png) appears to only serve one file at a time. Here's another example of this staircase load-time effect (http://rkuykendall.com/uploads/staircase.png) and another very clear example (http://rkuykendall.com/uploads/staircase2.png). I talked to some guys on Freenode #httpd with no luck. I created a duplicate server to play with, and also created a fresh server with Fedora Core 13 and moved over just the database and web files, with no luck. Any suggestions? (image links disabled due to n00b spam restrictions)
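
    A hedged place to look: a staircase like that usually means the server is only willing to handle one connection at a time, e.g. an Apache prefork pool pinned at a single worker. The stock Fedora httpd.conf section to compare against looks roughly like this:

        <IfModule prefork.c>
            StartServers         8
            MinSpareServers      5
            MaxSpareServers     20
            ServerLimit        256
            MaxClients         256
            MaxRequestsPerChild 4000
        </IfModule>

    If MaxClients (or ServerLimit) has ended up tiny on the new box, browsers' parallel requests would queue up exactly as the waterfalls show.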

  • Missing Memory on Windows Server 2008

    - by Chris Lively
    I have a Windows 2008 x64 server with 8GB of RAM installed. Task Manager and Resource Monitor both insist that 7.5GB of the RAM is in use. However, the memory list under Processes (Memory - Private Bytes) doesn't add up. I do have "Show processes from all users" checked, and hand-adding the numbers I come up with about 3.5GB of RAM. I also looked at the latest copy of Sysinternals Process Explorer, and neither Private Bytes nor Working Set adds up to more than about 3.5GB of RAM in use. What's going on?

    Update: I bounced the server to see what would happen with the memory utilization. After boot and regular operations began, it sat at 3GB of RAM usage. 18 hours later, it's back up to 6.8GB of usage with no indication as to where the additional 3.5GB or so of RAM is being used. Here are links to screenshots of the Resource Monitor and Task Manager: Resource Monitor Task Manager

    Update 2: Well, I believe I located the problem. When I detached one of the larger databases from my SQL Server, the amount of RAM shown as "in use" dropped drastically, while the Memory Private Bytes count barely moved. So I'm guessing that SQL Server has some way of allocating memory where it doesn't really show up in any of the monitors. I went further and created a new database file, then transferred all of the data from the one I detached. Even though it has the same data, and the same transactions going through it, the memory in use has stayed low. Maybe there was some corruption in the DB? I'll leave it to the DB gods and go searching for another "problem" ;)
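
    A hedged footnote on the mechanism: when SQL Server is allowed to use locked pages (AWE-style allocations), that memory is not charged to the process's private bytes, which fits these symptoms. The instance will report its true footprint if asked directly:

        -- how much memory has this instance actually grabbed?
        SELECT cntr_value / 1024 AS total_server_memory_mb
        FROM sys.dm_os_performance_counters
        WHERE counter_name = 'Total Server Memory (KB)';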

  • Nginx Removes the index.php from URL

    - by codeHead
    I have a CodeIgniter PHP application on nginx. It works as expected on Apache, but after moving to nginx I noticed that the index.php is automatically removed from the URL in all my links. In fact, when I try using index.php, it does not go to the desired URL but gets redirected to my default controller. Below is a copy of my nginx.conf file:

        server {
            listen 80;
            server_name mydomainname.com;
            root /var/www/domain/current;
            # index index.php;
            error_log /var/log/nginx/error.log;
            access_log /var/log/nginx/access.log main;

            location / {
                # Check if a file or directory index file exists, else route it to index.php.
                try_files $uri $uri/ /index.php;
            }

            location ~* \.php {
                fastcgi_pass backend;
                include fastcgi.conf;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 4 256k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_read_timeout 500;
                #fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                add_header Expires "Thu, 01 Jan 1970 00:00:01 GMT";
                add_header Cache-Control "no-cache, no-store, private, proxy-revalidate, must-revalidate, post-check=0, pre-check=0";
                add_header Pragma no-cache;
                add_header X-Served-By $hostname;
            }

            location ~* ^.+\.(css|js)$ {
                expires 7d;
                add_header Pragma public;
                add_header Cache-Control "public";
            }

            # set expiration of assets to MAX for caching
            location ~* \.(ico|gif|jpe?g|png)(\?[0-9]+)?$ {
                expires max;
                log_not_found on;
            }
        }

    I need to use my URLs with index.php - please help.
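
    A hedged guess at where the index.php is going: CodeIgniter builds its links from its own configuration rather than from the web server, so application/config/config.php is worth checking alongside nginx:

        $config['index_page']   = 'index.php';     // site_url() puts index.php back into links
        $config['uri_protocol'] = 'REQUEST_URI';   // a routing mode that tends to behave under nginx

    Both keys are standard CodeIgniter settings; the redirect-to-default-controller symptom is what a uri_protocol mismatch tends to look like.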

  • IIS 6.0 https not working "connection was reset"

    - by cad
    Application server: Windows Server 2003 SP2 with IIS 6.0. IIS has a "Default Web Site" (port 18000, SSL 443, ID=1) with a certificate created by me. I have a specific site called "scj.galaxy.Weekly" (port 80, SSL 443, ID=1272369728) that is working fine. I have an entry in windows/system32/drivers/etc/hosts that links galaxy.Weekly.scjdev.ds to the server IP on both my local machine and the application server. These sites work:

        http://scj.galaxy.weekly/test.html
        http://scj.galaxy.weekly/test.aspx

    But https://scj.galaxy.weekly/test.html fails. The error message is: "The connection was reset. The connection to the server was reset while the page was loading." The certificate was working fine for months. It was created with something similar to this:

        selfssl /N:CN=*.scjdev.ds /V:3650 /S:1 /P:443

    I have tried several options and none of them are working:

        1. Create a certificate only in "Default Web Site" and link it to SecureBindings at the command prompt:
           cscript adsutil.vbs set /w3svc/1272369728/SecureBindings ":443:galaxy.Weekly.scjdev.ds"
        2. Create a certificate only in the Galaxy site and link it to SecureBindings.
        3. Create a certificate in both and link them to SecureBindings.

    Probably I am missing a step or something, but I can't see it. Here is the relevant config of the Galaxy site:

        <IIsWebServer Location ="/LM/W3SVC/1272369729" AuthFlags="0"
            LogPluginClsid="{FF160663-DE82-11CF-BC0A-00AA006111E0}"
            SSLCertHash="c36a514a0be90fbc121d9c19bb052842289d5aee" SSLStoreName="MY"
            SecureBindings=":443:galaxy.Weekly.scjdev.ds" ServerAutoStart="TRUE"
            ServerBindings=":80:galaxy.Weekly.scjdev.ds"
            ServerComment="galaxy.Weekly.scjdev.ds" >
        </IIsWebServer>
        <IIsWebVirtualDir Location ="/LM/W3SVC/1272369729/root"
            AccessFlags="AccessRead | AccessScript" AppFriendlyName="Default Application"
            AppIsolated="2" AppRoot="/LM/W3SVC/1272369729/Root"
            AuthFlags="AuthAnonymous | AuthNTLM" DefaultDoc="Default.aspx"
            DirBrowseFlags="EnableDirBrowsing | DirBrowseShowDate | DirBrowseShowTime |
                DirBrowseShowSize | DirBrowseShowExtension | DirBrowseShowLongDate"
            Path="D:\Webs\Galaxysite" ScriptMaps="some config... " >
        </IIsWebVirtualDir>
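
    A hedged observation on the selfssl line: /S:1 binds the generated certificate to site ID 1 (the Default Web Site), not to the Galaxy site, and a *.scjdev.ds wildcard matches only one label, so it covers foo.scjdev.ds but not galaxy.Weekly.scjdev.ds. If that diagnosis is right, regenerating against the site's own ID and full host name might be worth a try:

        selfssl /N:CN=galaxy.Weekly.scjdev.ds /V:3650 /S:1272369729 /P:443

    (1272369729 is the site ID shown in the IIsWebServer config above; the body text says 1272369728, so check which one IIS actually reports.)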

  • Glassfish and SSL [closed]

    - by Richard
    I'm struggling to get SSL working on GlassFish 3.1.1. I've been following tutorials like http://javadude.wordpress.com/2010/04/06/getting-started-with-glassfish-v3-and-ssl/ and SO posts like this: Issues with setting up SSL on Glassfish v3. The above links are for information only; I've summarised what I've done below. As far as I can tell I'm doing everything correctly, but I'm getting this error: "SSL configuration is invalid due to No available certificate or key corresponds to the SSL cipher suites which are enabled". Some background on what I have done: My cert is from GoDaddy. I generated the CSR from a new keystore (keystore.jks), then imported the resulting certs back into the same keystore and set the keystore password to the same pwd as the GF master password. Then I created a new SSL listener in GF and pointed it at my keystore file (which I copied into domains/domain1/config). I set the Nickname to the alias of my cert (which is something like 'mydomain.org', i.e. the name that I get when I run keytool -list). In the ciphers section on the network listeners page, I leave the defaults in place (empty, which means all ciphers are available, I think). In domain.xml I've replaced all instances of s1as with 'mydomain.org'. This is the question: what exactly is causing the error highlighted? I'm guessing it's a mismatch between my listener config and the aliases in my keystore, or something similar, but I'm not really sure what. Thanks
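
    Two hedged keytool sanity checks, since that error typically means GlassFish opened the keystore but found no usable private key under the configured nickname:

        keytool -list -keystore keystore.jks
            # the 'mydomain.org' alias must say PrivateKeyEntry, not trustedCertEntry;
            # a cert imported under a fresh alias is a certificate without a key
        keytool -keypasswd -alias mydomain.org -keystore keystore.jks
            # align the key password with the keystore/master password; they can differ silently

    A trustedCertEntry under the nickname, or a key password that differs from the master password, are two classic causes of that failure.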

  • Is my current htaccess setting hurting SEO?

    - by user656002
    I have a site that I have redirecting to https. I do this to leverage wildcard SSL for my password-protected pages. Everything seems to work fine in testing; for example, whether you type in http or www, you always get redirected to the SSL https site. That said, I have about 200-300 external backlinks, many of them high quality, yet Google Webmaster Tools (along with SEOmoz) shows I have just 4... Huh? I'm embarrassed to say I just discovered this. This has led me to hypothesize that maybe my settings in htaccess are messed up, so Google isn't recognizing a link because it's recorded on another site as http instead of https. Maybe? At any rate, here is my simple htaccess setting to 301 www to http (the https redirect must be done inside the virtual host file, I think); I don't have anything in the htaccess file for https:

        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

    Like I said, everything works fine for the redirect over https, so I'd rather not screw up what works. On the other hand, something is very wrong with Google finding all my backlinks, so I need to fix something... I'm just wondering whether Google isn't picking up my backlinks from other websites recording me as http because I'm at https. Maybe Google doesn't care and it's some other issue. Am I barking up the right tree? If so, any quick fixes? Thanks as always!
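
    If the double hop is the worry (www, then http, then https), one hedged tidy-up is to send www visitors straight to the https host in a single 301:

        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ https://example.com/$1 [L,R=301]

    Redirect chains are not known to make Webmaster Tools drop backlinks outright, though, so it is worth ruling out a reporting lag before blaming htaccess.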

  • (Some) security perms in WinXP corrupted (shows GUID instead of username)

    - by Andy
    I've been using my Win XP machine (part of a domain) over the holiday period, so until yesterday it hadn't rebooted for about five days. I used it yesterday perfectly fine and shut it down. When I switched it on this morning, the majority (but not all) of my shortcut links in the Quick Launch toolbar showed as generic file icons. If you open the folder and get properties on one of the failing shortcuts, it says "Target type: This is not a valid shortcut". Then in Outlook I noticed my signature wasn't showing (I checked my sent folder and the sig was OK yesterday). Checking the signature folder, I can't see the security tab on any of the sig files, and I get an access denied message on trying to open them. I can see the security tab on the signature folder itself, just not on any of the contents. If I try to use the parent folder's security tab and "Replace permission entries on all child objects with entries shown here that apply to child objects", it appears to work fine, but makes no actual difference. I logged in as administrator and saw that the owner of the files showed up as a GUID (clearly my account should have been there instead). Any ideas what might have made that happen? So far I haven't heard any similar complaints from anyone else at the office...

  • Folder name in sidebar is changed, how to get it back?

    - by otakustay
    My OS X is in Simplified Chinese, so the user folders in the Finder sidebar look like this:

        AirDrop
        应用程序
        桌面
        文稿
        下载
        影片
        音乐
        图片

    These stand for AirDrop, Applications, Desktop, Documents, Downloads, Movies, Music and Pictures. However, after some accident, my "Downloads" folder in the sidebar has changed to its original English name, and I don't know why, so my sidebar now looks like:

        AirDrop
        应用程序
        桌面
        文稿
        Downloads
        影片
        音乐
        图片

    I know symbolic links to these folders can cause this, but I cannot find any symbolic link to my ~/Downloads folder. So is there any way to get the Chinese name back? It looks so weird and dirty. I have a Time Machine backup from 3 days ago, but I have made quite a lot of modifications to my user folder in these 3 days, so Time Machine may not be a good choice.
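
    A hedged fix that matches how OS X localizes those folders: Finder only shows the localized name when the folder contains an empty marker file named .localized, so recreating it and relaunching Finder may bring the Chinese name back:

        touch ~/Downloads/.localized
        killall Finder    # relaunch Finder so the sidebar refreshes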

  • Website and file/directory permissions

    - by mathiass
    I've been given a task to fix this one website. One of its issues is that on one page the images have broken links - the images are not showing, and clicking on an image (i.e. the direct link to the image file) results in a 403 (Forbidden) error. I am looking for some feedback on what could be the possible cause. The directory where the images are stored has the following permissions:

        drwxrws--- www "group" 10240 Aug 2008 "image directory name"

    I had to hide the names. I checked the page source code, and everything seems to be in place. The rest of the site, and other images outside that image directory, are showing fine. I was told that recently there have been some changes to the server. I'm trying to assume that there is no fault in the source code and that the permissions are - or used to be - correct (since the site has been working before, and no recent changes to the site itself have been made). My only thoughts at the moment are that either: a) the directory permissions should be drwxrws--x (executable for other users), or b) there is a change in the server settings that I don't know of. Is there anything else I should check?
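
    A hedged way to test this directly on the server: find out which user the web server actually runs as, then try reading the directory as that user (the config path assumes a typical Apache layout):

        grep -Ei '^(user|group)' /etc/httpd/conf/httpd.conf   # who does the web server run as?
        sudo -u www ls "/path/to/image directory"             # swap www for the user found above

    If the recent server changes moved the web server to a different user (or out of that group), the drwxrws--- mode with no world access would produce exactly this 403.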

  • How To Completely Move Users/Program Files/Program Files (x86)/ProgramData (Folders) To Another Partition(s) On Windows 8?

    - by Enigma83
    I am attempting to move the folders Users, Program Files, Program Files (x86), and ProgramData (at the root of the C drive) to at least 2 other partitions, preferably on a fresh install. I have read that there are methods for doing this post-install, but it seems like it would be a bit more tedious to do things that way. I want to move the 2 Program Files folders to another partition on the same HDD, and Users/ProgramData will go to yet another partition on the same HDD. I have done a bit of research on this, and read up on approaches that involve booting into Audit Mode, using the RoboCopy command to copy folders after booting from my Windows 8 USB drive, creating NTFS junctions/symbolic links, and Registry edits, as well as accomplishing this automatically via an unattend (answer) file, which Windows Setup processes before the user is ever booted in for the first time. I tried this morning and now have a basic installation in which programs like Internet Explorer fail to open, certain files can't be found/opened (even if I click on them directly), an example being Regedit. Also, I can't run the Command/DOS (CMD) prompt as Administrator (or otherwise, as any other user), can't activate the real Administrator account, and can't open any of the Administrative Tools (despite having added them to my Start Screen). So far I have only tried RoboCopy-ing Program Files and Program Files (x86), creating junction points for them, and editing the Registry in the relevant locations; this is what I'm left with now. I also found a blog article which describes how to do this for Windows 7. So, where should I go from here, and where can I find more information? And how can this be done without disabling the Metro apps, which I've read will stop working if you move ProgramData? Once I have everything moved, where do I install programs to? Do I tell them to install to C:\Program Files / C:\Program Files (x86), or to the junctioned/symbolic-linked partition/drive? I plan to test in VMware virtual machines from here on until things are working correctly, while using a baseline default install for daily tasks.
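
    For reference, a hedged sketch of the junction step those guides describe, run from an elevated prompt in the setup environment (Shift+F10) or anywhere C:\Users is not in use; the target drive letter is a placeholder:

        :: copy everything, preserving ACLs, skipping existing junction points
        robocopy "C:\Users" "D:\Users" /E /COPYALL /XJ
        rmdir "C:\Users" /S /Q
        :: junction back, so applications still resolve C:\Users
        mklink /J "C:\Users" "D:\Users"

    The same three steps apply per folder. Once the junctions exist, installers can be pointed at the usual C:\ paths, since the junction transparently lands the files on the other partition; ProgramData remains the risky one, for the Metro-app reason noted above.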
