Search Results

Search found 33009 results on 1321 pages for 'google index'.


  • Generate metadata of all files in a dir?

    - by nmuntz
    We are working on a project that is quite big, and it's stored in an SVN repository under different folders, with many files all over the place. Quite often it is hard to locate the document that contains a certain keyword or phrase. Does anyone know of a program that will generate and index the metadata of all the files in these documentation folders? (Most filetypes are xls, doc, and ppt.) Windows Search and Google Desktop could be an option, but they would generally index the whole hard drive, emails, etc., which is much more than we need and not suited to something folder-specific. An example of what I'm looking for: a program or web page where I enter "John Doe" and it shows me all files in MyProjectFolder/ that contain that keyword. The content would of course already be indexed somewhere, so searches should be almost instantaneous. Is there such a tool, or am I asking too much? Thanks in advance!

    Read the article

  • 500 internal server error

    - by Rockr
    I am facing a 500.0 Internal Server Error quite frequently on my website. The error details are given below.

      HTTP Error 500.0 - Internal Server Error
      C:\PHP\php-cgi.exe - The FastCGI process exceeded configured activity timeout
        Module:        FastCgiModule
        Notification:  ExecuteRequestHandler
        Handler:       PHP_via_FastCGI
        Error Code:    0x80070102
        Requested URL: http://mydomain.com:80/index.php
        Physical Path: C:\HostingSpaces\coderefl\mydomain.com\wwwroot\index.php
        Logon Method:  Anonymous
        Logon User:    Anonymous

    When I contacted the support team, they said my site is making heavy SQL queries. I am not sure how to debug this, but my site is very small and the database is optimized. I'm running WordPress as the platform. How do I resolve this issue?
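
    Error code 0x80070102 with "exceeded configured activity timeout" points at IIS's FastCGI activityTimeout rather than at the database. A minimal sketch of raising it with appcmd, assuming IIS 7 or later where appcmd is available; the 600-second value is illustrative, and the php-cgi.exe path must match the one in the error:

      %windir%\system32\inetsrv\appcmd set config -section:system.webServer/fastCgi ^
          "/[fullPath='C:\PHP\php-cgi.exe'].activityTimeout:600" /commit:apphost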

    Read the article

  • Redirect rss feed users

    - by Jeremy Love
    I made a redirect, but when I subscribe to it, it doesn't get the feed from my new URL; it gets the one from my old URL. Here's what I have:

      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      RewriteCond %(REQUEST_URI) ^/articles$ [NC]
      RewriteRule ^(.*)$ htp://newsite.mysite.com/articles [R=301,L]
      RewriteCond %(REQUEST_URI) /(.)
      RewriteRule ^(.*)$ htp://newsite.mysite.com [R=302]
      RewriteCond %{HTTP_HOST} ^www\.oldsite.mysite\.com$
      RewriteRule ^(.*)$ http://newsite.mysite.com [R=301,L]
      Redirect 301 / http://newsite.mysite.com/
      </IfModule>

    Any help is greatly appreciated. Also, due to me having no points, I had to rename two of the URLs to htp instead of http.
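
    Two things stand out in the posted rules. The catch-all RewriteRule . /index.php [L] sits above the redirects, so feed requests are rewritten to index.php before the redirect rules ever run, and the conditions use parentheses, %(REQUEST_URI), where mod_rewrite expects braces. A minimal sketch of the feed redirect with both fixed, paths assumed from the post:

      RewriteEngine On
      # redirect the old feed path first, before any catch-all rewrite
      RewriteCond %{REQUEST_URI} ^/articles$ [NC]
      RewriteRule ^(.*)$ http://newsite.mysite.com/articles [R=301,L]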

    Read the article

  • Recommendation for tuning hundreds of SQL databases

    - by wayne
    I'm running several SQL Servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned, but customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? With at least 600 catalogs, I can't have someone manually profile and index each one according to its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine.
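
    Since both SQL 2005 and 2008 expose the missing-index DMVs, one hedged starting point is to sweep each catalog for the suggestions the optimizer has already collected. A minimal sketch, run per database, with the TOP 20 cutoff purely illustrative:

      -- Surface the highest-impact missing-index suggestions recorded
      -- by the optimizer since the last service restart.
      SELECT TOP 20
          mid.statement                             AS table_name,
          mid.equality_columns,
          mid.inequality_columns,
          mid.included_columns,
          migs.user_seeks * migs.avg_user_impact    AS estimated_benefit
      FROM sys.dm_db_missing_index_details      AS mid
      JOIN sys.dm_db_missing_index_groups       AS mig
        ON mig.index_handle = mid.index_handle
      JOIN sys.dm_db_missing_index_group_stats  AS migs
        ON migs.group_handle = mig.index_group_handle
      ORDER BY estimated_benefit DESC;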

    Read the article

  • Backup Picasa 'people' tags data

    - by pelms
    OK, so I've spent a fair amount of time putting names to faces in Picasa 3.5, but in a few days (hopefully) my copy of Windows 7 should arrive and I'll need to reinstall Windows. So, does anyone know what I need to back up so that I don't have to re-enter all those name tags? N.B. I'm on Windows 7 RC and know that I don't have to do a clean reinstall, but I would prefer to. Outcome: I clean-installed Windows 7 and downloaded and installed Picasa. Unfortunately, the download link on the UK Picasa homepage still pointed to Picasa 3.0 (rather than 3.5), which doesn't have face recognition. This scanned my photo folders and overwrote the picasa.ini files along with the people information. Fortunately I'd backed up the photos before installing Win 7, so after uninstalling Picasa 3.0 (along with its database), restoring the photos from backup and installing Picasa 3.5, I finally got my face names back. Extra: Google has now posted advice on how to migrate to Windows 7 and keep your Picasa database, meaning that it will not need to rescan your photos and will retain all information about them, including name tags. They have a method for upgrading and for a clean install of Win 7. Basically you need to back up "C:\Users\%username%\AppData\Local\Google\Picasa2" and "C:\Users\%username%\AppData\Local\Google\Picasa2Albums".
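
    A minimal sketch of backing those two folders up from a command prompt; the D:\PicasaBackup destination is hypothetical, substitute your own:

      xcopy "C:\Users\%username%\AppData\Local\Google\Picasa2" "D:\PicasaBackup\Picasa2" /E /I /H
      xcopy "C:\Users\%username%\AppData\Local\Google\Picasa2Albums" "D:\PicasaBackup\Picasa2Albums" /E /I /H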

    Read the article

  • Web server replica not working on another server

    - by user761076
    I have a Drupal installation (PHP + MySQL) on a server, and I'm trying to copy this installation to another server with the same configuration: same physical and virtual paths, same DB configuration, etc. The thing is, on my new server I get the homepage to work, but not the inner pages, so I guess it has something to do with rewriting (mod_rewrite is installed, and both .htaccess files are the same). When I access http://localhost/myweb/content/mypage I get a 404, or a "Forbidden" if I uncomment this in httpd.conf (the original httpd.conf does not have this entry):

      <Directory "path/to/docs">
          DirectoryIndex index.php index.html
          Options Indexes FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
      </Directory>

    Any clue? Thank you
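
    Worth checking: Drupal's clean URLs are implemented by the rewrite rules in its .htaccess, and AllowOverride None tells Apache to ignore .htaccess entirely, which would fit a working front page and 404s on inner paths. A hedged variant of the same block, with the path placeholder kept from the post:

      <Directory "path/to/docs">
          DirectoryIndex index.php index.html
          Options FollowSymLinks
          # .htaccess (and with it Drupal's clean-URL rewrites) is
          # ignored unless overrides are allowed:
          AllowOverride All
          Order allow,deny
          Allow from all
      </Directory>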

    Read the article

  • Why is 'grep -i' so slow, and how can I make it faster for ASCII?

    - by Vi.
    Consider:

      $ time lzop -d < tvtropes-index.lzo | egrep -B 5 '[Dd][eE][sS][cC][eE][nN][dD] ?[Ff][rR][oO][mM]'
      real    0m0.438s
      $ time lzop -d < tvtropes-index.lzo | egrep -B 5 'descend ?from' -i
      real    0m11.294s

    Both searches are case-insensitive. Why is the -i version so slow? How do I make grep -i fast without writing things like [iI][nN] [tT][hH][iI][sS] [wW][aA][Yy]? For example, perl -ne 'print if /descend ?from/i' runs fast, but -B 5 is not as trivial to implement there as it is in grep (likewise for the other options).
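
    A commonly suggested workaround, assuming the input really is plain ASCII: force the C locale, so grep can fold case byte by byte instead of going through locale-aware multibyte matching:

      $ time lzop -d < tvtropes-index.lzo | LC_ALL=C egrep -i -B 5 'descend ?from'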

    Read the article

  • How to force or redirect to SSL in nginx?

    - by Callmeed
    I have a signup page on a subdomain, like https://signup.mysite.com. It should only be accessible via HTTPS, but I'm worried people might somehow stumble upon it via HTTP and get a 404. My html/server block in nginx looks like this:

      html {
          server {
              listen 443;
              server_name signup.mysite.com;
              ssl on;
              ssl_certificate /path/to/my/cert;
              ssl_certificate_key /path/to/my/key;
              ssl_session_timeout 30m;
              location / {
                  root /path/to/my/rails/app/public;
                  index index.html;
                  passenger_enabled on;
              }
          }
      }

    What can I add so that people who go to http://signup.mysite.com get redirected to https://signup.mysite.com? (FYI, I know there are Rails plugins that can force SSL, but I was hoping to avoid that.)
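
    A minimal sketch of the usual approach: add a second server block that listens on port 80 for the same name and does nothing but redirect:

      server {
          listen 80;
          server_name signup.mysite.com;
          # send every plain-HTTP request to the HTTPS site, keeping the path
          rewrite ^(.*)$ https://signup.mysite.com$1 permanent;
      }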

    Read the article

  • Thousands of visits a day from untraceable traffic to website - Serious issue

    - by kel
    At the end of January we noticed a spike in traffic to what JetPack stats said was the home/archive page, and what Google was classifying as going to /gaming/, which is an archive list in WordPress. This started off as ~3,000 unique visitors and jumped up to 65,000 unique visitors in one day, again all to the "home" page. This happened over the course of a couple of weeks and we thought we were being attacked. The traffic then dropped off for a few days, but came back at only about ~15,000 uniques a day and has stayed like that every day since. We came to the conclusion that something wasn't tracking right somewhere, decided this was legitimate traffic, and brushed it off.

    Now here comes the problem: Google AdSense has just disabled our account for "invalid clicks". We are trying to figure out where this traffic is coming from and stop it if it's not legitimate, or figure out a way to track it correctly.

    Specs for the site: dedicated server running CentOS 6 with nginx, php-fpm and MySQL. The site is built on WordPress, and we use CloudFlare and W3 Total Cache. Analytics in use are Google Analytics, Quantcast, Alexa and Compete. Any kind of help would be awesome.

    UPDATE: I'm finding more people with the same type of problem, and there doesn't seem to be a solution.
    http://netmeg.com/bot-attack/
    http://stkywll.com/2012/03/02/annoying-cyborgs-attach-distort-analytics/
    After looking at the access logs I noticed they were all CloudFlare IPs. I looked into that and found out that CloudFlare acts as a proxy, and there is a way to fix the logs in nginx. The visits come from many different ISPs in the US. They go to /games/ or /gaming/ (/games/ redirects to /gaming/), and all seem to have the same user agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0).
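
    For reference, the log fix mentioned above is nginx's realip module: tell nginx to trust CloudFlare's proxy addresses and take the client IP from the header CloudFlare adds. A minimal sketch, requiring ngx_http_realip_module; the address ranges below are illustrative only, use CloudFlare's published list:

      # inside the http {} block
      set_real_ip_from 204.93.240.0/24;   # one CloudFlare range, for illustration
      set_real_ip_from 204.93.177.0/24;
      real_ip_header  CF-Connecting-IP;   # header carrying the original client IP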

    Read the article

  • Nginx not working properly on subdomains

    - by javipas
    I've been trying to set up a SugarCRM instance. I've got a domain that has its main site on one server (www.domain.com), and I've created a subdomain (sugar.domain.com), but I want this subdomain to be hosted on another server. This second server has nginx installed, and there's a working WordPress blog there on a virtualhost, so I need to set up a second site. To do this I've created the directory structure, and I've created an /etc/nginx/sites-enabled/sugar.domain.com configuration file containing the following:

      server {
          listen 80;
          server_name sugar.domain.com *.domain.com;
          access_log /var/www/sugar/log/access.log;
          error_log /var/www/sugar/log/error.log info;
          location / {
              root /var/www/sugar;
              index index.php;
          }
          location ~ .php$ {
              fastcgi_split_path_info ^(.+\.php)(.*)$;
              fastcgi_pass backend;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /var/www/sugar/$fastcgi_script_name;
              include fastcgi_params;
              fastcgi_param QUERY_STRING $query_string;
              fastcgi_param REQUEST_METHOD $request_method;
              fastcgi_param CONTENT_TYPE $content_type;
              fastcgi_param CONTENT_LENGTH $content_length;
              fastcgi_intercept_errors on;
              fastcgi_ignore_client_abort on;
              fastcgi_read_timeout 180;
          }
          ## Disable viewing .htaccess & .htpassword
          location ~ /\.ht {
              deny all;
          }
      }
      upstream backend {
          server 127.0.0.1:9000;
      }

    As far as I know, I need the *.domain.com parameter on the server_name flag, but something is breaking here: I either get a 403 Forbidden error, or I get PHP code (I can read the PHP source in the browser, like normal text) that somehow is not executed. I've tried setting permissions to 755 inside the /var/www/sugar/ directory, and I've also set the owner:group with chown -R www-data:www-data /var/www/sugar/. The thing is, I don't know if my mistake is in the nginx site configuration, in my folder permissions, or somewhere else :( Could it be because the main domain (www.domain.com) is hosted on another server? Do they necessarily have to be together?
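
    One thing worth ruling out before blaming permissions: raw PHP source in the browser means the request was served as a static file, and a 403 typically means nginx tried to list a directory without a usable index. A hedged sketch of two small changes that often address both symptoms, using the same paths as the post:

      location / {
          root /var/www/sugar;
          index index.php;
          # fall through to index.php for SugarCRM's routed URLs
          try_files $uri $uri/ /index.php?$args;
      }
      location ~ \.php$ {
          # escape the dot: an unescaped ".php$" matches any character
          # before "php", not just a literal dot
          # ...the fastcgi settings from above, unchanged...
      }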

    Read the article

  • Does nginx auth_basic work over HTTPS?

    - by monde_
    I've been trying to set up a password-protected directory in an SSL website as follows, in /etc/nginx/sites-available/default:

      server {
          listen 443;
          ssl on;
          ssl_certificate /usr/certs/server.crt;
          ssl_certificate_key /usr/certs/server.key;
          server_name server1.example.com;
          root /var/www/example.com/htdocs/;
          index index.html;
          location /secure/ {
              auth_basic "Restricted";
              auth_basic_user_file /var/www/example.com/.htpasswd;
          }
      }

    The problem is that when I try to access the URL https://server1.example.com/secure/, I get a "404: Not Found" error page. My error.log shows the following:

      2011/11/26 03:09:06 [error] 10913#0: *1 no user/password was provided for basic authentication, client: 192.168.0.24, server: server1.example.com, request: "GET /secure/ HTTP/1.1", host: "server1.example.com"

    However, I was able to set up password-protected directories for a normal HTTP virtual host without any problems. Is it a problem with the config or something else?
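
    auth_basic itself behaves the same over HTTPS as over HTTP, so the protocol is unlikely to be the culprit, and nginx logs "no user/password was provided" on the first, unauthenticated request as a matter of course. The real question is why /secure/ 404s; it is worth confirming that /var/www/example.com/htdocs/secure/index.html actually exists, since a missing index file produces exactly this error. A hedged sketch that restates the inherited settings inside the protected location, just to rule inheritance out while debugging (paths from the post):

      location /secure/ {
          auth_basic "Restricted";
          auth_basic_user_file /var/www/example.com/.htpasswd;
          # normally inherited from the server block; stated here only
          # to take inheritance off the suspect list
          root /var/www/example.com/htdocs/;
          index index.html;
      }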

    Read the article

  • IIS6 can't find site on local network

    - by chezy525
    I have a Windows 2003 server with dual NICs running IIS6. I can access everything remotely, but the internal network can't seem to find the site, regardless of which IP address I try. There are really several weird things happening here, but I'm going to limit this question to what I'm guessing is the simplest problem (hoping its solution fixes other things as well): from the server itself, I can access the web page using the primary IP address (i.e. http://192.168.1.2/index.htm), but not using the secondary IP address (i.e. http://10.10.10.2/index.htm). Pinging both IP addresses from the server works, and the "Web site identification" in IIS has the IP address set to "(All Unassigned)"... which I believe should bind both IP addresses to this site. I apologize if I'm not providing enough detail about my setup, but at this point I don't even know what's relevant...
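
    One hedged thing to check on Server 2003: HTTP.sys keeps its own listen list, and if it has ever been restricted to one address, IIS's "(All Unassigned)" binding can't reach the other. The httpcfg tool from the Support Tools can show and change that list; a sketch, with 10.10.10.2 taken from the question:

      httpcfg query iplisten
      rem if only 192.168.1.2 is listed, add the second address:
      httpcfg set iplisten -i 10.10.10.2
      net stop http /y
      net start w3svc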

    Read the article

  • 500 internal server error from a long-running PHP process

    - by Sabirul Mostofa
    I am trying to run a long PHP process, and it ends with a 500 internal server error. It executes fine for about 8 minutes. I have rebooted the machine after changing the PHP settings.

      PHP config: max_execution_time = 3600

    After around 10 minutes, ps ax | grep php shows:

      19007 ?  S  0:08 /usr/bin/php /home/gypsy/public_html/index.php

    I have set ignore_user_abort to true. The process gets stuck at 0:08 CPU time and is not executed further. The Apache error log shows:

      Script timed out before returning headers: index.php

    It seems the max_execution_time somehow isn't taking effect. Any suggestion would be a great help.
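
    "Script timed out before returning headers" is mod_fcgid's message rather than PHP's, which would explain why raising max_execution_time changes nothing: the FastCGI wrapper gives up first. A hedged sketch of the matching Apache-side settings; directive names vary by mod_fcgid version, and older releases spell them IPCCommTimeout and BusyTimeout:

      # in the Apache/mod_fcgid configuration, values in seconds (illustrative)
      FcgidIOTimeout   3600
      FcgidBusyTimeout 3600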

    Read the article

  • Searching Netapp Network Share in Windows 7

    - by user121270
    Windows 7 famously does not do what its predecessor, Windows XP, did very well: index and search network drives! Sometimes the logic of MS is absolutely baffling. That said, I am trying to find some solution to the issue, which is made more complicated by the fact that we are using a NetApp FAS 2020 as a CIFS file server. I know some of the solutions to the Windows 7 search-index issue revolve around having the Search Service installed on a Windows 2008 server and then adding that server share to a library on the Windows 7 workstation. Is it possible to accomplish this in any way with a CIFS share on a NetApp filer?

    Read the article

  • How do I troubleshoot a "page not found" error when configuring IIS6 on Windows Server 2003?

    - by Vinicius Ottoni
    I have configured IIS6 on my Windows Server 2003 machine following this guide: http://www.simongibson.com/intranet/iis6/ After that I created a new web site inside the Web Sites directory. Inside its physical path I created an index.htm that contains:

      <html>
      <body>Test</body>
      </html>

    But I got the following error: "The page cannot be found". When I put the same index file inside the Default Web Site's physical path, it works. I configured the new web site per the link above, using an IP address and without a host header. What should I do to troubleshoot this, or is there an obvious configuration error?
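
    A hedged first step: compare how IIS thinks each site is bound. The iisweb.vbs admin script that ships with Server 2003 lists every site with its status, IP, port and host header, which makes a binding clash (two sites claiming the same IP:port, leaving the new one stopped) easy to spot:

      cscript %SystemRoot%\system32\iisweb.vbs /query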

    Read the article

  • Is there a way to force Windows to recognize a network folder as a local drive, for the purposes of

    - by NoCatharsis
    I just started using the file search program Everything at work to search through documentation on our shared drives. This is after disappointments with Google Desktop and Windows Search. I love the speed of Everything, but I wish it were able to index other shared folders. My makeshift solution was to somehow force Windows to recognize the necessary shared folders as local drives, then add them to the index list. I have also considered using SyncToy, but this requires downloading all data to my drive, which could be terabytes of information - obviously not a good idea on a small company network. What would be the best solution here?

    Read the article

  • Can sendmail be configured to discard routed email that has been rejected by the next hop?

    - by Guy Bolton King
    Background: We have a handful of hosts (running sendmail) acting as the MXs for a few domains each. Each domain is handled via the sendmail/cf /etc/mail/virtusertable, with a set of known recipients and a catch-all reject rule. Mail to postmaster on each host is aliased to root, and root is aliased to root+<host>@ourdomain.com. The MX for ourdomain.com is Google Apps, and [email protected] is a simple group that forwards to the admins. Google Apps will reject some emails at the SMTP stage, usually because of illegal attachments (instead of accepting them and filing them as spam).

    Problem: Given a particular spam email sent to a domain in a virtusertable entry:
    1. If the recipient address rejects the mail, then sendmail will try to send a DSN to the sender.
    2. If that sender also rejects the mail (because it's a falsified sender, and the MX for the sender rejects the mail as spam), then sendmail sends a DSN to the postmaster.
    3. The routing detailed above takes place, and... Google Apps rejects the mail as well.
    4. sendmail now gives up with a "savemail panic" and leaves the mail in the queue forever.
    5. Our mail queue fills up with garbage.

    Is there any way I can get sendmail to discard messages that have been rejected by the next virtusertable hop (i.e. after step 1 in the problem description)? Or does anyone have any other solutions to this?
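
    One hedged avenue: the mail that ends in a savemail panic is effectively a double bounce (a failed DSN for a failed DSN), and sendmail lets you choose where double bounces go via confDOUBLE_BOUNCE_ADDRESS. Pointing it at a local alias that discards, instead of at the Google-routed postmaster, may be enough; a sketch for the .mc file and /etc/aliases, where "devnull" is a name made up for this example:

      dnl in sendmail.mc -- send double bounces to a discarding local alias
      define(`confDOUBLE_BOUNCE_ADDRESS', `devnull')dnl

      # in /etc/aliases (then run newaliases):
      devnull: /dev/null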

    Read the article

  • nginx rewrite for wikkawiki

    - by Hans
    I just set up WikkaWiki on my server, and I have been trying to make links go from wiki.mysite.info/wikka.php?wakka=Start to wiki.mysite.info/DotMG. I tried following their guide at http://docs.wikkawiki.org/ModRewrite; however, it seems incomplete and outdated. Furthermore, as of version 1.3.2, base_url isn't even manually configurable from the wikka.config.php file. I am using version 1.3.2 of WikkaWiki. My nginx virtual hosts file contains:

      server {
          listen 80;
          server_name wiki.mysite.info;
          root /usr/share/nginx/wikka/;
          access_log /usr/share/nginx/.access/wikka;
          error_log /usr/share/nginx/.error/wikka error;
          location / {
              index index.php;
              try_files $uri $uri/ @wikka;
          }
          location @wikka {
              rewrite ^(.*/[^\./]*[^/])$ $1/ last;
              rewrite ^(.*)$ /wikka.php?wakka=$1 last;
          }
          location ~* \.php$ {
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              include /etc/nginx/fastcgi_params;
          }
      }

    Thus far it partly works: I can go to wiki.mysite.info/APage and it'll display that page. However, it doesn't work on all pages; sometimes the browser simply downloads the page (for some reason it always downloads the Start page), and when I go to wiki.mysite.info/ it downloads the wikka.php file... Furthermore, the links on the wiki still contain wikka.php?wakka=, so whenever I navigate around the wiki it goes back to wiki.mysite.info/wikka.php?wakka=APage. I think something is wrong with my rewrite, but I can't say for sure. Contents of fastcgi_params:

      fastcgi_param QUERY_STRING $query_string;
      fastcgi_param REQUEST_METHOD $request_method;
      fastcgi_param CONTENT_TYPE $content_type;
      fastcgi_param CONTENT_LENGTH $content_length;
      fastcgi_param SCRIPT_FILENAME $request_filename;
      fastcgi_param SCRIPT_NAME $fastcgi_script_name;
      fastcgi_param REQUEST_URI $request_uri;
      fastcgi_param DOCUMENT_URI $document_uri;
      fastcgi_param DOCUMENT_ROOT $document_root;
      fastcgi_param SERVER_PROTOCOL $server_protocol;
      fastcgi_param GATEWAY_INTERFACE CGI/1.1;
      fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
      fastcgi_param REMOTE_ADDR $remote_addr;
      fastcgi_param REMOTE_PORT $remote_port;
      fastcgi_param SERVER_ADDR $server_addr;
      fastcgi_param SERVER_PORT $server_port;
      fastcgi_param SERVER_NAME $server_name;
      fastcgi_param HTTPS $server_https;
      # PHP only, required if PHP was built with --enable-force-cgi-redirect
      fastcgi_param REDIRECT_STATUS 200;
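
    A hedged observation: the trailing-slash rewrite in @wikka turns /Start into /Start/ before the second rewrite runs, so PHP is asked for a wakka value with a stray slash, and the handling of a bare / is similarly muddied; that kind of mismatch can end with nginx serving the script as a plain file, which would match the download symptom. One simplification worth trying, a minimal sketch that drops the slash juggling and routes everything through wikka.php directly:

      location / {
          try_files $uri $uri/ @wikka;
      }
      location @wikka {
          # hand the whole path (minus the leading slash) to wikka.php
          rewrite ^/(.*)$ /wikka.php?wakka=$1 last;
      }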

    Read the article

  • Is there a local yubnub.org replacement?

    - by Justin Keogh
    I use yubnub very often; every Google search I do is just (in Firefox) Ctrl-T, then (now in the URL bar) "y g searchterms" [Enter]. "y" in this case is a search keyword I added by right-clicking in the yubnub.org command box. It's really fast, and I just do it automatically now... but the problem is that I am now stuck with whatever the yubnub command I am so used to does. I can't change it... For example, what if I don't want to use Google, but I still want to use the "g" command to search? Or say I want to use Google's HTTPS search, etc. I suppose this would be kind of trivial to implement locally... but I would hate to re-invent the code if it's already done and in use... Ideas? Also, a local yubnub.org replacement would save me the DNS lookup and traffic to yubnub.org. I don't expect to be able to import all commands from yubnub.org, but that would be cool if possible.
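
    For the single-command case, Firefox's built-in keyword bookmarks already do this locally, with no server at all: create a bookmark, give it a keyword, and put %s where the query should go. A sketch that remaps "g" to an HTTPS Google search (the exact URL is illustrative):

      Name:     Google (HTTPS)
      Keyword:  g
      Location: https://encrypted.google.com/search?q=%s

    After that, typing "g searchterms" in the URL bar expands entirely on your machine.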

    Read the article

  • Is it possible to mod_rewrite BASED on the existence of a file/directory and uniqueID? [closed]

    - by JM4
    My site currently forces all non-www pages to use www. Ultimately, I am able to handle all unique subdomains and parse them correctly, but I am trying to achieve the following (ideally with mod_rewrite): when a consumer visits www.site.com/john4, the server processes that request as www.site.com/index.php?Agent=john4. Our requirements are:
    1. The URL should continue to show www.site.com/john4 even though it was rewritten to www.site.com/index.php?Agent=john4.
    2. If a file (of any extension) or a directory exists with that name, the entire process stops and it tries to pull that file instead. For example, www.site.com/file would pull up www.site.com/file.php if file.php existed on the server, and www.site.com/pages would go to www.site.com/pages/index.php if the pages directory exists.
    A sketch of this precedence follows below. Thank you ahead of time; I am completely taking a shot in the dark right now.
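
    A minimal .htaccess sketch of that precedence, assuming everything lives at the document root; rule order does the work, with real files and directories winning first, then name.php, then the agent fallback:

      RewriteEngine On
      # 1. leave real files and directories alone
      RewriteCond %{REQUEST_FILENAME} -f [OR]
      RewriteCond %{REQUEST_FILENAME} -d
      RewriteRule ^ - [L]
      # 2. /file -> /file.php when that script exists
      RewriteCond %{DOCUMENT_ROOT}/$1.php -f
      RewriteRule ^([^/]+)/?$ /$1.php [L]
      # 3. otherwise treat the path segment as an agent name
      RewriteRule ^([^/]+)/?$ /index.php?Agent=$1 [L,QSA]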

    Read the article

  • Is there a more elegant way to apply conditions in nginx?

    - by Ryan Detzel
    Is there a better way to do this? I can't find a way to nest conditions, or to apply boolean operators to them, in nginx. Basically, if there is a cookie set (a non-anonymous user) we want to hit the server. If the cookie is not set and the file exists, we want to serve the file; otherwise, hit the server.

      set $test "D";
      if ($http_cookie ~* "session") {
          set $test "${test}C";
      }
      if (-f $request_filename/index.html$is_args$args) {
          set $test "${test}F";
      }
      if ($test = DF) {
          rewrite (.*)/ $1/index.html$is_args$args? break;
      }
      if ($test = DCF) {
          proxy_pass http://django;
          break;
      }
      if ($test = DC) {
          proxy_pass http://django;
          break;
      }
      if ($test = D) {
          proxy_pass http://django;
          break;
      }
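
    One hedged simplification: the three proxy branches collapse into a single fallback once you notice the flag only has to distinguish "anonymous AND cached file exists" from everything else. A sketch using a named location as the catch-all, with the same upstream name as above:

      location / {
          error_page 418 = @django;          # recycled status code as an internal jump
          if ($http_cookie ~* "session") {
              return 418;                    # logged-in users always hit the app
          }
          # anonymous users: serve the cached page if present, else fall through
          try_files $uri/index.html @django;
      }
      location @django {
          proxy_pass http://django;
      }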

    Read the article

  • NGINX SSL Certificate Not Working

    - by LeSamAdmin
    I've been working on SSL stuff and getting nowhere after about four tutorials... I've bought an SSL certificate for pingrglobe.com, and now I'm trying to apply it to my servers. Here's my nginx code:

      http {
          server {
              listen 80;
              server_name pingrglobe.com;
              rewrite ^(.*) http://www.pingrglobe.com$1 permanent;
          }
          server {
              listen 443;
              ssl on;
              ssl_certificate /etc/nginx/ssl/pingrglobe.crt;
              ssl_certificate_key /etc/nginx/ssl/pingrglobe.key;
              # enables SSLv3/TLSv1, but not SSLv2, which is weak and should no longer be used
              ssl_protocols SSLv3 TLSv1;
              # disables all weak ciphers
              ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;
              server_name www.pingrglobe.com;
              root /var/www/pingrglobe.com;
              index index.html index.php;
              location / {
                  try_files $uri $uri/ @extensionless-php;
                  add_header Access-Control-Allow-Origin *;
              }
              rewrite ^/blog/blogpost/(.+)$ /blog/blogpost?post=$1 last;
              rewrite ^/viewticket/(.+)/(.*)$ /viewticket?tid=$1&$2 last;
              rewrite ^/vemail/(.+)$ /vemail?eid=$1 last;
              rewrite ^/serversettings/(.+)$ /serversettings?srvid=$1 last;
              rewrite ^/notification/(.+)$ /notification?id=$1 last;
              rewrite ^/viewreport/(.+)$ /viewreport?srvid=$1 last;
              rewrite ^/removeserver/(.+)$ /removeserver?srvid=$1 last;
              rewrite ^/staffviewticket/(.+)/(.*)$ /staffviewticket?tid=$1&$2 last;
              rewrite ^/activate/(.*)/(.*)/(.*)$ /activate?user=$1&code=$2&email=$3 last;
              rewrite ^/activate2/(.*)/(.*)/(.*)$ /activate2?user=$1&code=$2&email=$3 last;
              rewrite ^/passwordtoken/(.+)/(.*)/(.*)$ /passwordtoken?user=$1&token=$2&email=$3 last;
              location ~ \.php$ {
                  try_files $uri =404;
                  fastcgi_pass unix:/var/run/php5-fpm.sock;
                  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                  include fastcgi_params;
              }
              location @extensionless-php {
                  rewrite ^(.*)$ $1.php last;
              }
              location ~ /\. {
                  deny all;
              }
          }
      }

    SSL doesn't work, as you can see here: https://www.pingrglobe.com
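
    A frequent cause when a freshly purchased certificate "doesn't work" in browsers is a missing intermediate chain: nginx expects the server certificate and the CA's intermediates concatenated into the single file named by ssl_certificate, server certificate first. A sketch, where the intermediate file name stands in for whatever your CA shipped:

      cat pingrglobe.crt intermediate_ca.crt > pingrglobe.chained.crt
      # then point ssl_certificate at /etc/nginx/ssl/pingrglobe.chained.crt and reload nginx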

    Read the article

  • How can I create matrices of data in Excel?

    - by sandeep
    I want to create a 4x4 matrix in Excel 2007 from three or more columns, for example:

      Column index   Row index   Name
      1              2           x
      2              3           y
      3              4           z
      4              1           p

    This is how the data looks, and I want cell 1x1 to be p, cell 1x2 to be x, and so on. I want output like the following matrix:

             1   2   3   4
        1    p   x   y   z
        2    p   x   y   z
        3    p   x   y   z
        4    p   x   y   z

    I have a very large amount of data like this; sometimes the matrix size goes up to 60x60.
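
    Reading the desired output, each output column appears to show the Name whose index matches the column header, repeated down every row. Assuming the source list sits in A1:C5 with headers in row 1, and the output grid's column headers 1 to 4 sit in F1:I1 (this cell layout is hypothetical), a lookup along these lines could fill the grid and would scale to 60x60:

      =INDEX($C$2:$C$5, MATCH(F$1, $A$2:$A$5, 0))

    Enter it in F2, then fill right and down.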

    Read the article

  • ftp connects but files aren't visible when browsing

    - by YsoL8
    Hello. If this should be on that other site, please don't shoot me, as I can't remember its name or URL. I have an FTP account in Dreamweaver that connects to the remote site and appears to upload files as normal. But when I browse to the location, I can't see any new files or changes to the index page (I've uploaded index.php and connect.php); I'm getting a 404 page. I suspect the host directory is wrong, but looking at the file tree, I can't see the folder I'm supposed to be using, so I'm uploading to the apparent site root. Any guidance on this?

    Read the article
