Search Results

Search found 39968 results on 1599 pages for 'access manager'.


  • Yet again: "This device can perform faster" (Samsung Galaxy Tab 2)

    - by Mike C
    I've been doing a lot of research with no reasonable solution. Please excuse the length of my post. When I plug my Galaxy Tab 2 (7" / Wi-Fi only / Android ICS) into my Windows 7 64-bit machine, I (almost always) get this warning popup that "This device can perform faster." And in fact, transfers onto the Tab in this mode are slow. The two times I've been able to get a high-speed connection, the transfer has occurred at the expected speed. I just don't know what to do to get that high-speed transfer. (The first time I did, it was the first time I connected the Tab; the second time I did, I was fiddling around and unplugging/plugging in again.) That popup is telling me that the device is USB2, but that it thinks I've connected to a USB1 port. In fact, every USB port (there are ten) on this system is USB2. It's an ASUS M3A78-EMH mobo from late 2008. I'm not sure what the chipset is; the CPU is an AMD Athlon 4850e, but I've seen this message reported for non-AMD systems. (Every mobo reference I've seen in reports on this has been for Asus, but of course most reporters aren't reporting that info at all.) The Windows 7 installation is just a couple weeks old (I had a disk crash) but I saw the same warning on the WinXP/64 that was installed previously. In Device Manager, there are two "Standard Enhanced PCI to USB Host Controller" nodes which are the actual high-speed controllers. There are also five "Standard OpenHCD USB Host Controller" nodes, which I have determined are virtual USB1 controllers embedded in the "Enhanced" controllers. (In Device Manager, I'm using View|Devices by Connection.) My high-speed thumb drives, external disks, and iPod all show up as subnodes of the "Enhanced" controllers; the keyboard, mouse, and USB speakers under the "OpenHCD" ones -- and this is true no matter which ports these devices are plugged into. The Tab shows up under an OpenHCD node, unsurprisingly. It appears as a threesome: a top-level "Mobile USB Composite device" with two subs: "Galaxy Tab 2" and "Mobile USB Modem." (I have no idea what the modem device implies or how I might use it, but I don't care about it either: I just want the Tab to reliably connect at high speed.) On the Tab, the USB support has a switch between PTP and MTP, the latter being the default, and the preferred mode for me (as I'm usually hooking it up for music synch). I have tried, however, connecting it as PTP, and it still connects as USB 1. (As PTP, only the "Galaxy Tab 2" device appears -- no Composite, no Modem.) If it's plugged in as MTP and I change the setting to PTP, Windows unloads and reloads the device, and voila: The Tab appears under an "Enhanced" node, but eventually re-loads again to show a exclamation icon on the device; Properties then shows "This device cannot start." Same response if I plug it in as PTP and then change to MTP; in this case, only the Tab itself shows the exclamation, not the other two devices. One thing I have not tried, and really would prefer to avoid, is installing the "beta" chipset driver available on the Asus website, which is dated 2009. Windows tells me it has the most up-to-date drivers for the Tab, and for the chipset, and I'm inclined to believe that. I suspect the problem is with the Samsung drivers, or possibly the hardware. One suggestion I saw elsewhere which might, possibly, pertain is to ensure the USB cable is properly shielded; however, the Tab has one of those misbegotten 30-pin, not-quite-an-iPod connectors; I don't know if I could find a 3rd party one. 
It seems unlikely that this cable is improperly shielded, though. (Is there a way to test that?) So, my question is: does anyone know how to get this working as one might reasonably expect it to?

    Read the article

  • Single-Signon options for Exchange 2010

    - by freiheit
    We're working on a project to migrate employee email from Unix/open-source (courier IMAP, exim, squirrelmail, etc) to Exchange 2010, and trying to figure out options for single-signon for Outlook Web Access. So far all the options I've found are very ugly and "unsupportable", and may simply not work with Forefront. We already have JA-SIG CAS for token-based single-signon and Shibboleth for SAML. Users are directed to a simple in-house portal (a Perl CGI, really) that they use to sign in to most stuff. We have an HA OpenLDAP cluster that's already synchronized against another AD domain and will be synchronized with the AD domain Exchange will be using. CAS authenticates against LDAP. The portal authenticates against CAS. Shibboleth authenticates with CAS but pulls additional data from LDAP. We're moving in the direction of having web services authenticate against CAS or Shibboleth. (Students are already on SAML/Shibboleth authenticated Google Apps for Education) With Squirrelmail we have a horrible hack linked to from that portal page that authenticates against CAS, gets your original plaintext password (yes, I know, evil), and gives you an HTTP form pre-filled with all the necessary squirrelmail login details with javaScript onLoad stuff to immediately submit the form. Trying to find out exactly what is possible with Exchange/OWA seems to be difficult. "CAS" is both the acronym for our single-signon server and an Exchange component. From what I've been able to tell there's an addon for Exchange that does SAML, but only for federating things like free/busy calendar info, not authenticating users. Plus it costs additional money so there's no way to experiment with it to see if it can be coaxed into doing what we want. Our plans for the Exchange cluster involve Forefront Threat Management Gateway (the new ISA) in the DMZ front-ending the CAS servers. So, the real question: Has anybody managed to make Exchange authenticate with CAS (token-based single-signon) or SAML, or with something I can reasonably likely make authenticate with one of those (such as anything that will accept apache's authentication)? With Forefront? Failing that, anybody have some tips on convincing OWA Forms Based Authentication (FBA) into letting us somehow "pre-login" the user? (log in as them and pass back cookies to the user, or giving the user a pre-filled form that autosubmits like we do with squirrelmail). This is the least-favorite option for a number of reasons, but it would (just barely) satisfy our requirements. From what I hear from the guy implementing Forefront, we may have to set OWA to basic authentication and do forms in Forefront for authentication, so it's possible this isn't even possible. I did find CasOwa, but it only mentions Exchange 2007, looks kinda scary, and as near as I can tell is mostly the same OWA FBA hack I was considering slightly more integrated with the CAS server. It also didn't look like many people had had much success with it. And it may not work with Forefront. There's also "CASifying Outlook Web Access 2", but that one scares me, too, and involves setting up a complex proxy config, which seems more likely to break. And, again, doesn't look like it would work with Forefront. Am I missing something with Exchange SAML (OWA Federated whatchamacallit) where it is possible to configure to do user authentication and not just free/busy access authorization?
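
    If the "pre-login" fallback ends up being the route, the same auto-submitting form trick used for SquirrelMail can in principle be pointed at OWA's forms-based authentication endpoint. A rough sketch of the shim the CAS-authenticated portal would emit; the endpoint and hidden-field names are assumptions based on OWA 2010's default FBA form and may differ per deployment, and it carries the same plaintext-password drawbacks as the SquirrelMail hack:

        <!-- hypothetical auto-submit shim; mail.example.com and the field names are placeholders -->
        <form id="owa" method="post" action="https://mail.example.com/owa/auth.owa">
          <input type="hidden" name="destination" value="https://mail.example.com/owa/">
          <input type="hidden" name="flags" value="4">
          <input type="hidden" name="username" value="DOMAIN\jdoe">
          <input type="hidden" name="password" value="secret">
        </form>
        <script>document.getElementById("owa").submit();</script>

    Whether Forefront's own pre-authentication would tolerate a POST like this is a separate question, so treat it as something to prototype, not a supported path.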

    Read the article

  • nginx - 403 Forbidden

    - by michell90
    I'm having trouble getting aliases to work correctly on nginx. When I try to access the aliases /pma and /mda (see secure.example.com.conf), I get a 403 Forbidden, but the base URL works correctly. I read a lot of posts but nothing helped, so here I am. Nginx and php-fpm are running as www-data:www-data and the permissions for the directories are set to:

        drwxrwsr-x+  5 www-data www-data 4.0K Dec 5 22:48 ./
        drwxr-xr-x.  3 root     root     4.0K Dec 4 22:50 ../
        drwxrwsr-x+  2 www-data www-data 4.0K Dec 5 13:10 mda.example.com/
        drwxrwsr-x+ 11 www-data www-data 4.0K Dec 5 10:34 pma.example.com/
        drwxrwsr-x+  3 www-data www-data 4.0K Dec 5 11:49 www.example.com/
        lrwxrwxrwx.  1 www-data www-data   18 Dec 5 09:56 secure.example.com -> www.example.com/

    I'm sorry for the bulk, but I thought better too much than too little. Here are the configuration files:

    /etc/nginx/nginx.conf

        user www-data www-data;
        worker_processes 1;
        error_log /var/log/nginx/error.log;
        #error_log /var/log/nginx/error.log notice;
        #error_log /var/log/nginx/error.log info;
        pid /var/run/nginx.pid;
        events { worker_connections 1024; }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            keepalive_timeout 65;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/sites-enabled/secure.example.com

        server {
            listen 80;
            server_name secure.example.com;
            return 301 https://$host$request_uri;
        }
        server {
            listen 443;
            server_name secure.example.com;
            access_log /var/log/nginx/secure.example.com.access.log;
            error_log /var/log/nginx/secure.example.com.error.log;
            root /srv/http/secure.example.com;
            include /etc/nginx/ssl/secure.example.com.conf;
            include /etc/nginx/conf.d/index.conf;
            include /etc/nginx/conf.d/php-ssl.conf;
            autoindex off;
            location /pma/ { alias /srv/http/pma.example.com; }
            location /mda/ { alias /srv/http/mda.example.com; }
        }

    /etc/nginx/ssl/secure.example.com.conf

        ssl on;
        ssl_certificate /etc/nginx/ssl/secure.example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/secure.example.com.key;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;

    /etc/nginx/conf.d/index.conf

        index index.php index.html index.htm;

    /etc/nginx/conf.d/php-ssl.conf

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param HTTPS on;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            include fastcgi_params;
        }

    /var/log/nginx/secure.example.com.error.log

        2013/12/05 22:49:04 [error] 29291#0: *2 directory index of "/srv/http/pma.example.com" is forbidden, client: 176.199.78.88, server: secure.example.com, request: "GET /pma/ HTTP/1.1", host: "secure.example.com"

    EDIT: Forgot to mention, I'm running CentOS 6.4 x86_64 and nginx 1.0.15. Thanks in advance!
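
    For what it's worth, a minimal sketch of the usual fix for this kind of alias 403, assuming the layout above: when the location ends in a trailing slash, the alias target generally needs one too, and an index directive inside the location lets nginx serve index.php instead of attempting a (forbidden) directory listing.

        location /pma/ {
            alias /srv/http/pma.example.com/;   # trailing slash to match "location /pma/"
            index index.php;
            # PHP is still handled by the existing php-ssl.conf block;
            # SCRIPT_FILENAME $request_filename resolves correctly under alias
        }

    The same applies to /mda/. Check the result with "nginx -t" and reload before retesting.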

    Read the article

  • NetBackup with VSS and Instant Recovery - Failing to delete old snapshots

    - by Jonathan Bourke
    We are attempting to implement Microsoft VSS for snap-shotting in our NetBackup 6.5.3.1 environment. The clients are both 32 & 64 bit Windows 2003 Server. Snapshot parameters are: Instant recovery is enabled Maximum snapshots = 1 Provider type = 1 (System) Snapshot attribute = 1 (Differential) All backups successfully complete, and VSS shadows are successfully created both for the snapshot backup and for the open files (shadow copy components). The Issue: NetBackup is not clearing or overwriting old snapshots with each successive backup. When we list shadows, and shadow storage, it is increasing and increasing. IT is not honouring the Maximum Snapshot setting. The Logs: The bpfis log doesn’t really appear to show any errors other than for methods which we are not employing (VxVM, Flashsnap, etc.). A section is as follows: 11:54:10.744 [348.4724] <2> logparams: D:\Program Files\Veritas\NetBackup\bin\bpfis.exe delete -nbu -id htpststr001.san.mgmt.det_1248918143 -bpstart_to 300 -bpend_to 300 -clnt htpststr001.san.mgmt.det 11:54:10.744 [348.4724] <4> bpfis: INF - BACKUP START 348 11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS error 10; see following messages: 11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: Non-fatal method error was reported 11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vfm_configure_fi_one: method: FlashSnap, type: FIM, function: FlashSnap_init 11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS method error 3; see following message: 11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: FlashSnap_init: Veritas Volume Manager not installed. 11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS error 10; see following messages: 11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: Non-fatal method error was reported 11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vfm_configure_fi_one: method: vxvm, type: FIM, function: vxvm_init 11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS method error 3; see following message: 11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vxvm_init: Veritas Volume Manager not installed. 11:54:11.713 [348.4724] <4> onlfi_thaw: Thawing C:\ using snapshot method VSS. 11:54:11.713 [348.4724] <2> onlfi_vfms_logf: vfm_thaw: delete snapshot ... 11:54:11.744 [348.4724] <2> onlfi_vfms_logf: snapshot services: emcclariionfi:Thu Jul 30 2009 11:54:11.744000 <Thread id - 4724> Unable to import any login credentials for any appliances. 11:54:11.760 [348.4724] <2> onlfi_vfms_logf: snapshot services: hpevafi:Thu Jul 30 2009 11:54:11.760000 <Thread id - 4724> CHpEvaPlugin::init: CLI tool is not installed. 11:54:11.760 [348.4724] <2> onlfi_vfms_logf: snapshot services: hpmsafi:Thu Jul 30 2009 11:54:11.760000 <Thread id - 4724> No array mangement credentials are available in configuration file. 11:54:13.806 [348.4724] <4> onlfi_thaw: do_thaw return value: 0 11:54:13.806 [348.4724] <4> onlfi_thaw: Thawing D:\ using snapshot method VSS. 11:54:15.806 [348.4724] <4> onlfi_thaw: do_thaw return value: 0 11:54:19.806 [348.4724] <2> fis_delete_id: removing D:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.htpststr001.san.mgmt.det_1248918143.0 11:54:19.806 [348.4724] <2> fis_delete_id: removing D:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.htpststr001.san.mgmt.det_1248918143.0.fiid 11:54:19.853 [348.4724] <4> bpfis: INF - EXIT STATUS 0: the requested operation was successfully completed The Question: Has anyone any experience of NetBackup / VSS not clearing snapshots after backups? 
We will ultimately be using a HP EVA for the snapshots, but we want to ensure correct functioning at a VSS level before we go further. Regards, Jonathan (PS: Question previously posted by my colleague on entsupport.symantec.com)
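
    While waiting on a proper fix, the built-in vssadmin tool on Windows Server 2003 can confirm whether the leftover shadows really are accumulating and clear them by hand. A hedged sketch, run on the client; the drive letter is a placeholder:

        vssadmin list shadows
        vssadmin list shadowstorage
        rem remove the oldest shadow for one volume, or everything for that volume
        vssadmin delete shadows /for=C: /oldest
        vssadmin delete shadows /for=C: /all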

    Read the article

  • SQLExpress service unable to start Error code 17053

    - by Chris Sobolewski
    A user was instructed by their software support to upgrade a program and install SQLExpress as part of the installation process. Since that time, the service has been unable to start, citing error 17053, which appears to be an authentication issue. Here is the error log:

        2011-01-11 13:17:45.50 Server Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86) Feb 9 2007 22:47:07 Copyright (c) 1988-2005 Microsoft Corporation Express Edition on Windows NT 5.1 (Build 2600: Service Pack 2)
        2011-01-11 13:17:45.50 Server (c) 2005 Microsoft Corporation.
        2011-01-11 13:17:45.50 Server All rights reserved.
        2011-01-11 13:17:45.50 Server Server process ID is 3332.
        2011-01-11 13:17:45.50 Server Authentication mode is WINDOWS-ONLY.
        2011-01-11 13:17:45.50 Server Logging SQL Server messages in file 'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG'.
        2011-01-11 13:17:45.52 Server This instance of SQL Server last reported using a process ID of 2332 at 11/10/2010 2:15:24 PM (local) 11/10/2010 7:15:24 PM (UTC). This is an informational message only; no user action is required.
        2011-01-11 13:17:45.52 Server Error: 17053, Severity: 16, State: 1.
        2011-01-11 13:17:45.52 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered.
        2011-01-11 13:17:45.52 Server Registry startup parameters:
        2011-01-11 13:17:45.52 Server -d c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf
        2011-01-11 13:17:45.52 Server -e c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG
        2011-01-11 13:17:45.52 Server -l c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\mastlog.ldf
        2011-01-11 13:17:45.52 Server Error: 17113, Severity: 16, State: 1.
        2011-01-11 13:17:45.52 Server Error 3(The system cannot find the path specified.) occurred while opening file 'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf' to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary.
        2011-01-11 13:17:45.52 Server Error: 17053, Severity: 16, State: 1.
        2011-01-11 13:17:45.52 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered.
        4 Server Error: 17053, Severity: 16, State: 1.
        2011-01-11 13:08:21.34 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered.
        12:47:20.85 spid5s SQL Trace ID 1 was started by login "sa".
        2011-01-11 12:47:20.90 spid5s Starting up database 'mssqlsystemresource'.
        2011-01-11 12:47:20.93 spid5s The resource database build version is 9.00.3042. This is an informational message only. No user action is required.
        2011-01-11 12:47:21.21 spid5s Error: 15466, Severity: 16, State: 1.
        2011-01-11 12:47:21.21 spid5s An error occurred during decryption.
        2011-01-11 12:47:21.38 spid8s Starting up database 'model'.
        2011-01-11 12:47:21.38 Server Error: 17182, Severity: 16, State: 1.
        2011-01-11 12:47:21.38 Server TDSSNIClient initialization failed with error 0x5, status code 0x90.
        2011-01-11 12:47:21.38 Server Error: 17182, Severity: 16, State: 1.
        2011-01-11 12:47:21.38 Server TDSSNIClient initialization failed with error 0x5, status code 0x1.
        2011-01-11 12:47:21.38 Server Error: 17826, Severity: 18, State: 3.
        2011-01-11 12:47:21.38 Server Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log.
        2011-01-11 12:47:21.38 Server Error: 17120, Severity: 16, State: 1.
        2011-01-11 12:47:21.38 Server SQL Server could not spawn FRunCM thread. Check the SQL Server error log and the Windows event logs for information about possible related problems.

    One lead I had was to change the SQL logon account from "Network Service" to "Local System". Unfortunately, that is resulting in the error message "The Security ID Structure is Invalid [0x80070539]". Any help either uninstalling or getting SQLExpress running would be fantastic.
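
    Two quick checks follow from the log itself, sketched with the paths the errors report and assuming the service still runs as Network Service: confirm what startup parameters the service is really using, and make sure that account can write to the instance directory (error 5 is "Access is denied", and 17113 says master.mdf was not found at the configured path).

        rem registry startup parameters for the MSSQL.1 instance
        reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer\Parameters"

        rem on Windows XP, grant the service account full control of the instance folder
        cacls "c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL" /E /T /G "NETWORK SERVICE":F

    If the registry paths do not match what is actually on disk, correcting them there (or via SQL Server Configuration Manager) is more likely to help than changing the logon account.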

    Read the article

  • Can I get advice on my nginx configuration (as a proxy in front of Jira and Confluence)?

    - by Nate
    I was wondering if I could get some advice on my nginx configuration. The config seems to be working, but I'm unsure if I'm doing everything properly. The basic idea is to have a Jira and Confluence server (in separate Tomcat instances) running on the same machine, with nginx in front to handle SSL for both. I want only SSL connections to be made to Jira/Confluence. Jira is running on 127.0.0.1:9090 and Confluence on 127.0.0.1:8080. Here is my nginx.conf, any advice or tips would be greatly appreciated. user nginx; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] $request ' '"$status" $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; # Load config files from the /etc/nginx/conf.d directory include /etc/nginx/conf.d/*.conf; # Our self-signed cert ssl_certificate /etc/ssl/certs/fissl.crt; ssl_certificate_key /etc/ssl/private/fissl.key; # redirect non-ssl Confluence to ssl server { listen 80; server_name confluence.example.com; rewrite ^(.*) https://confluence.example.com$1 permanent; } # redirect non-ssl Jira to ssl server { listen 80; server_name jira.example.com; rewrite ^(.*) https://jira.example.com$1 permanent; } # # The Confluence server # server { listen 443; server_name confluence.example.com; ssl on; access_log /var/log/nginx/confluence.access.log main; error_log /var/log/nginx/confluence.error.log; location / { proxy_pass http://127.0.0.1:8080; proxy_set_header X-Forwarded-Proto https; proxy_set_header Host $http_host; } error_page 404 /404.html; location = /404.html { root /usr/share/nginx/html; } redirect server error pages to the static page /50x.html error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } } # # The Jira server # server { listen 443; server_name jira.example.com; ssl on; access_log /var/log/nginx/jira.access.log main; error_log /var/log/nginx/jira.error.log; location / { proxy_pass http://127.0.0.1:9090/; proxy_set_header X-Forwarded-Proto https; proxy_set_header Host $http_host; } error_page 404 /404.html; location = /404.html { root /usr/share/nginx/html; } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } } }
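
    The layout looks workable. Two small things worth checking, offered as a hedged sketch rather than verbatim Atlassian guidance: the line "redirect server error pages to the static page /50x.html" in the Confluence block appears to have lost its leading "#" (the Jira block has it), and it is common to also pass the client address through and tell each Tomcat connector it sits behind an https proxy so the applications build correct self-referencing links.

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        # and in each Tomcat server.xml connector (values are placeholders):
        #   proxyName="confluence.example.com" proxyPort="443" scheme="https"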

    Read the article

  • Backup a hosted Sharepoint

    - by David Mackintosh
    One of my customers has outsourced their Sharepoint and Exchange services to a hosted services provider. I believe it is a Sharepoint 2007 service. It is a shared hosting solution, so we do not have any kind of access to the server itself; we only have user-level and sharepoint-administrator-level access to the Sharepoint application. They have come to the point where they would like to have a copy of everything that is on the Sharepoint server. I have downloaded the Office Sharepoint Designer 2007, and it features three (!) ways to backup a Sharepoint server, none (!) of which work for me: File-Export-Personal Web Package: When selecting everything, it calculates a negative size. Barfs with No "content-type" in CGI environment error. File-Export-Sharepoint Template: barfs with a A World Wide Web browser, such as Windows Internet Explorer, is required to use this feature error. Site-Administration-Backup Web Site: wants to create the backup .cmp file on the sharepoint server itself. I don't have access to any servers on the same network so I can't redirect it to any form of the suggested \\server\place. Barfs with a The Web application at $URL could not be found. [...] error. Possibly moot because Google tells me that bad things happen using OSD to back up sites larger than 24MB (which this site is most definitely). So I called the helpdesk of the outsource provider, and got told that they recommend using OSD, but no they don't actually provide any application support for OSD (not that I blame them for that), but they could do a stsadm.exe backup and provide us with that, and OSD should be able to read the resulting cmp file. Then for authorization reasons they had my customer call them directly (since I can't authorize such an operation), and they told him that he didn't want a stsadm.exe backup, he wanted to get into an 'explorer view' and deal with things that way (they were vague). Google hasn't been much help in figuring out what an 'explorer view' is, let alone how I bring one up. The end goal of this operation is to have a backup of the site as it exists (hopefully today, but shortly anyways) in such a format that we don't need another sharepoint server to restore it to. Ie we'd like to be able to pick individual content directly out of this backup. We are not excessively concerned with things like formatting. We just want the documents. This is a fairly complex site with multiple subsites and multiple folders per subsite, so sitting there and manually downloading each file isn't really going to happen if there is a better easier way. So, my questions: Is the stsadm.exe backup what I want? If not, what do I want? If I manage to convince them that I do want the stsadm.exe backup, can I pick files out of the resulting backup file with OSD? If OSD isn't going to let me extract individual files, is there a tool I can use that can?
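
    For reference, the two things the provider mentioned map roughly to the following; the URLs are placeholders, the exact switches depend on their SharePoint build, and only they can run stsadm on a shared host. The export operation is the one that produces a .cmp content-migration package, which is what SharePoint Designer claims to read; the backup operation produces a single file that generally needs another SharePoint farm (stsadm -o restore) to get content back out of.

        rem content-migration export (.cmp package)
        stsadm -o export -url http://host.example.com/customer-site -filename customersite.cmp -includeusersecurity

        rem full site-collection backup (restore requires a SharePoint farm)
        stsadm -o backup -url http://host.example.com/customer-site -filename customersite.bak

    The "explorer view" they described is normally just WebDAV: from a Windows client, Explorer's "Open with Explorer" on a document library, or something like "net use * http://host.example.com/customer-site" with the WebClient service running, mounts the libraries as folders you can copy out of. That only gets documents, not lists or metadata.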

    Read the article

  • Intel Rapid Storage / Smart Response SSD caching issue

    - by goober
    Background Recently built my own PC. It works! Almost. It's been a while since getting into the guts of these things, so I'm familiar but may be missing something simple. FYI, I don't care about blowing the OS away -- it's brand new and we can go back from scratch as many times as necessary. Goal / Issue I'd like to use the SSD to take advantage of Intel's Smart Response technology (allows the SSD to act as a cache for HDDs) I would like the SSD cache to act as a cache for my HDDs, which I would like to be in a RAID1 array (so I get the speed from the SSD and the redundancy from the RAID1) However, Windows only sees the drive in device manager (not as a drive), so I'm unsure what to do about that. Related: as far as I know, for this to work, the drives all have to be in a single RAID array (i.e. a RAID0 pairing of the SSD and the RAID1 HDD array). However, when attempting this at the BIOS level, I am told there is not enough space for an array. Steps so Far Moved the SSD onto the Intel controller (I'd had it on the Marvel 6.0 controller instead of the Intel controller, so the BIOS was only seeing it in a strange way) Updated the BIOS of the motherboard to the latest version Reinstalled Intel's RST (iRST?) software several times, as some forums reported it working after reinstalling 3 times (which does not inspire confidence). Checked Intel storage: it does see the SSD as a physical, non-RAID device. However, it says no space exists if I try to create an array. Checked the BIOS: it does not show up in the boot order, but is an option that can be selected under boot options. Tried the firmware update for that model. Issue: the firmware CD doesn't detect a drive; maybe the Intel storage controller is making it difficult? moved the ssd to the marvel controller. The firmware update cd appeared to hang while searching for drives. swapped out the SATA cable for the manufacturer's and moved back to the intel storage controller. Noticed at this point that in the Intel RST software, a device DOES show up in addition to the RAID set -- only shown as a "60 GB internal disk". Windows doesn't appear to see it as a drive, but it does still show in device manager. Move SSD to port from 0-3 on MOBO and set SATA mode to IDE (after disconnecting RAID1 config) to allow the firmware update to work. Firmware was already at the latest version. Next Steps ? Components involved ASUS P8Z68-V PRO motherboard (Intel Z68 Chipset) Intel i7 2600k Processor 2 x 1TB 7200 RPM HDDs 64 GB Crucial M4 SSD (M4-CT064M4SSD2) For Reference -- Storage Configuration Intel 3 gbps Intel 3gbps Intel 6gbps Marvel 6gbps +----------+ +----------+ +----------+ +----------+ | | <----+ | | +-+ | | | |----------| | |----------| |-|--------| |----------| | | | | + | | | | | | +----------+ | +--|-------+ +-|--------+ +----------+ | | | + v v | 1 TB HDD 64 GB SSD + +> 1 TB HDD For Reference -- Intel RST (v10.8.0.1003) Screenshot Don't mind the "rebuilding" -- knocked a power cable out at one point; it's doing its job, not an indicator of a bad HDD. Any thoughts? Thanks in advance for any help!
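
    One frequently suggested check, hedged because it erases the SSD: RST often reports "not enough space" for acceleration when the cache drive still carries an old partition table or metadata. Clearing the 64 GB disk from an elevated prompt and then retrying the Accelerate option in the RST console is a common next step. Be certain the selected disk is the SSD and not one of the 1 TB drives.

        diskpart
        list disk
        rem pick the entry that shows roughly 60 GB
        select disk 2
        clean
        exit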

    Read the article

  • Hide subdomain AND subdirectory using mod_rewrite?

    - by Jeremy
    I am trying to hide a subdomain and subdirectory from users. I know it may be easier to use a virtual host but will that not change direct links pointing at our site? The site currently resides at http://mail.ctrc.sk.ca/cms/ I want www.ctrc.sk.ca and ctrc.sk.ca to access this folder but still display www.ctrc.sk.ca. If that makes any sense. Here is what our current .htaccess file looks like, we are using Joomla so there already a few rules set up. Help is appreciated. # Helicon ISAPI_Rewrite configuration file # Version 3.1.0.78 ## # @version $Id: htaccess.txt 14401 2010-01-26 14:10:00Z louis $ # @package Joomla # @copyright Copyright (C) 2005 - 2010 Open Source Matters. All rights reserved. # @license http://www.gnu.org/copyleft/gpl.html GNU/GPL # Joomla! is Free Software ## ##################################################### # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE # # The line just below this section: 'Options +FollowSymLinks' may cause problems # with some server configurations. It is required for use of mod_rewrite, but may already # be set by your server administrator in a way that dissallows changing it in # your .htaccess file. If using it causes your server to error out, comment it out (add # to # beginning of line), reload your site in your browser and test your sef url's. If they work, # it has been set by your server administrator and you do not need it set here. # ##################################################### ## Can be commented out if causes errors, see notes above. #Options +FollowSymLinks # # mod_rewrite in use RewriteEngine On ########## Begin - Rewrite rules to block out some common exploits ## If you experience problems on your site block out the operations listed below ## This attempts to block the most common type of exploit `attempts` to Joomla! # ## Deny access to extension xml files (uncomment out to activate) #<Files ~ "\.xml$"> #Order allow,deny #Deny from all #Satisfy all #</Files> ## End of deny access to extension xml files RewriteCond %{QUERY_STRING} mosConfig_[a-zA-Z_]{1,21}(=|\%3D) [OR] # Block out any script trying to base64_encode crap to send via URL RewriteCond %{QUERY_STRING} base64_encode.*\(.*\) [OR] # Block out any script that includes a <script> tag in URL RewriteCond %{QUERY_STRING} (\<|%3C).*script.*(\>|%3E) [NC,OR] # Block out any script trying to set a PHP GLOBALS variable via URL RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR] # Block out any script trying to modify a _REQUEST variable via URL RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2}) # Send all blocked request to homepage with 403 Forbidden error! RewriteRule ^(.*)$ index.php [F,L] # ########## End - Rewrite rules to block out some common exploits # Uncomment following line if your webserver's URL # is not directly related to physical file paths. # Update Your Joomla! Directory (just / for root) #RewriteBase / ########## Begin - Joomla! core SEF Section # RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !^/index.php RewriteCond %{REQUEST_URI} (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ [NC] RewriteRule (.*) index.php RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L] # ########## End - Joomla! core SEF Section EDIT Yes, mail.ctrc.sk.ca/cms/ is the root directory. Currently the DNS redirects from ctrc.sk.ca and www.ctrc.sk.ca to mail.ctrc.sk.ca/cms. However when it redirects the user still sees the mail.ctrc.sk.ca/cms/ url and I want them to only see www.ctrc.sk.ca.
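
    Assuming the DNS for ctrc.sk.ca and www.ctrc.sk.ca can be pointed at this same server (rather than redirecting to mail.ctrc.sk.ca), a hedged sketch of rules that could sit above the Joomla SEF section, keeping the visible hostname on www while serving out of /cms:

        RewriteEngine On
        # push the bare domain to www so only one hostname is ever shown
        RewriteCond %{HTTP_HOST} ^ctrc\.sk\.ca$ [NC]
        RewriteRule ^(.*)$ http://www.ctrc.sk.ca/$1 [R=301,L]
        # serve www.ctrc.sk.ca content out of /cms internally, without changing the address bar
        RewriteCond %{HTTP_HOST} ^www\.ctrc\.sk\.ca$ [NC]
        RewriteCond %{REQUEST_URI} !^/cms/
        RewriteRule ^(.*)$ /cms/$1 [L]

    The existing DNS-level redirect would need to go away for this to help, since a redirect has already rewritten the address bar before these rules ever run.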

    Read the article

  • Apache, ISPConfig & Roundcube alias

    - by Jay Zus
    I'm using ISPConfig to setup all the websites on my server but I also like to try to fiddle with the files myself to see how it works. Like you guessed, yes, I've broken something. I can't access my webmail setup by default on the server with the alias /webmail (I access it via the http://xxx.xxx.xx.xx/webmail) Firefox tells me that The page isn't redirecting properly Firefox has detected that the server is redirecting the request for this address in a way that will never complete. So I cleaned up my vhost files and the one of my websites work as intended, I think that the problem comes from my default.vhost. Here's the content of it <Directory /var/www/> AllowOverride None Order Deny,Allow Deny from all </Directory> <VirtualHost *:80> DocumentRoot /var/www/ ServerAdmin [email protected] <Directory /var/www/> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> This isn't a lot and I can't really see what's wrong with it, all I know is that it isn't the one that came with ISPConfig and I can't find an original one. Here's the roundcube.conf that loads with apache # Those aliases do not work properly with several hosts on your apache server # Uncomment them to use it or adapt them to your configuration # Alias /roundcube/program/js/tiny_mce/ /usr/share/tinymce/www/ # Alias /roundcube /var/lib/roundcube Alias /webmail /var/lib/roundcube/ # Access to tinymce files <Directory "/usr/share/tinymce/www/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order allow,deny allow from all </Directory> <Directory /var/lib/roundcube/> Options +FollowSymLinks # This is needed to parse /var/lib/roundcube/.htaccess. See its # content before setting AllowOverride to None. AllowOverride All order allow,deny allow from all </Directory> # Protecting basic directories: <Directory /var/lib/roundcube/config> Options -FollowSymLinks AllowOverride None </Directory> <Directory /var/lib/roundcube/temp> Options -FollowSymLinks AllowOverride None Order allow,deny Deny from all </Directory> <Directory /var/lib/roundcube/logs> Options -FollowSymLinks AllowOverride None Order allow,deny Deny from all </Directory> I didn't touch that file, but I guess it has something to do with the problem. I just can't find why it doesn't work. EDIT: This is the errors in my apache's log [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 540 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0" [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0" [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0" [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0" [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0" [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0" [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0"
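
    A quick way to see which vhost or alias is producing the loop, purely diagnostic: trace the redirect chain from the server itself and compare the Location headers with what the default vhost and the /webmail alias should each return.

        # show each hop of the redirect loop (gives up after 5 redirects)
        curl -sIL --max-redirs 5 http://localhost/webmail/ | grep -iE '^(HTTP|Location)'
        # confirm which vhost actually answers for the bare IP
        apache2ctl -S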

    Read the article

  • VirtualServer reverseproxy works locally, but not from client

    - by Yep
    Setup: 2 Webservers pointed to 127.0.0.1:8080 and :8081. Curl validates they work as expected. Apache with the following virt hosts: NameVirtualHost 192.168.1.1:80 <VirtualHost 192.168.1.1:80> ServerAdmin [email protected] ProxyPass / http://127.0.0.1:8080/ ProxyPassReverse / http://127.0.0.1:8080/ ServerName 192.168.1.1 ServerAlias http://192.168.1.1 </VirtualHost> NameVirtualHost 192.168.1.2:80 <VirtualHost 192.168.1.2:80> ServerAdmin [email protected] ProxyPass / http://127.0.0.1:8081/ ProxyPassReverse / http://127.0.0.1:8081/ ServerName 192.168.1.2 ServerAlias http://192.168.1.2 </VirtualHost> On the server I can curl to the virtualhosts and receive appropriate responses. (curl 192.168.1.1 gives me the webservers response from localhost:8080, etc) remote hosts cannot however connect to 192.168.1.1 or .2 at all. What am I missing? Re: comments Yes, the default directory Directive is still in place. # Deny access to root file system <Directory /> Options None AllowOverride None Order Deny,Allow deny from all </Directory> No apache logs are generated when trying to reach 192.168.1.1 remotely. They do get generated when curl from local. If I point the webservers to *:8080 and *:8081 instead of binding to localhost, I can access them from a remote host via 192.168.1.1 and 192.168.1.2 if i specify the 8080 and 8081 ports (both ports work on both IP's, which is what I'm trying to avoid with apache reverse proxy bind to 80 on each interface) Edit2: curl verbose output: (similar for second webserver, and for 127.0.0.1:portnum) [user@host mingle_12_2_1]$ curl -v 192.168.1.1 * About to connect() to 192.168.1.1 port 80 * Trying 192.168.1.1... connected * Connected to 192.168.1.1 (192.168.1.1) port 80 > GET / HTTP/1.1 > User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 > Host: 192.168.1.1 > Accept: */* > < HTTP/1.1 302 Found < Date: Tue, 16 Oct 2012 16:22:08 GMT < Server: Jetty(6.1.19) < Cache-Control: no-cache < Location: http://192.168.1.1/install < X-Runtime: 130 < Content-Type: text/html; charset=utf-8 < Content-Length: 94 < Connection: close Closing connection #0 <html><body>You are being <a href="http://192.168.1.1/install">redirected</a>.</body></html> log from the request local 192.168.1.1 - - [16/Oct/2012:12:22:08 -0400] "GET / HTTP/1.1" 302 94 no apache access log or error log generated when requests from remote clients.
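
    Since the vhosts answer locally but nothing is ever logged for remote clients, the requests are most likely not reaching Apache at all. A few hedged checks, using the addresses from the question:

        # is Apache listening on the 192.168.1.x addresses on port 80, or only on some of them?
        netstat -tlnp | grep ':80'
        # is a host firewall dropping the traffic before Apache sees it?
        iptables -L -n -v
        # from a remote client: does the TCP connection even open?
        curl -v --connect-timeout 5 http://192.168.1.1/

    If the connection opens from a remote host but still nothing appears in the access log, also check that the Listen directives cover those addresses and not just 127.0.0.1.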

    Read the article

  • All my sites are 403 but the server is running. Errors on startup

    - by Craig
    We gave access to a contractor to install a firewall and somehow while he was doing it he fracked something up. Everything went off-line about 24 hours ago and we are effectively out of business until I solve this and the person who messed up the thing is not returning calls. I found a few errors. First, I'm not a server guy - I can look at log files and normally everything runs fine. All 'services' are running according to 1and1 server monitoring and mail is being delivered just fine. The whole thing was off-line until I (probably stupidly) updated the kernel from 6.2 to 6.3 this morning and I got everything back except the http access. All the domains (~200 of them) are returning a 403 error and nothing is recorded in the access log. On every restart I see this error in the messages log file: init: Failed to spawn ttyS0 main process: unable to execute: No such file or directory and a little later these: kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted) kernel: Hardware name: X9SCL/X9SCM kernel: Modules linked in: xt_iprange iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 ext4 jbd2 serio_raw i2c_i801 i2c_core sg iTCO_wdt iTCO_vendor_support e1000e ext3 jbd mbcache raid1 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan] kernel: Pid: 367, comm: md3_raid1 Not tainted 2.6.32-220.2.1.el6.x86_64 #1 kernel: Call Trace: kernel: [<ffffffff81069997>] ? warn_slowpath_common+0x87/0xc0 kernel: [<ffffffff810699ea>] ? warn_slowpath_null+0x1a/0x20 kernel: [<ffffffff814eccc5>] ? thread_return+0x232/0x79d kernel: [<ffffffff8126a4d9>] ? cpumask_next_and+0x29/0x50 kernel: [<ffffffff813e9c05>] ? md_super_wait+0x55/0x90 kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40 kernel: [<ffffffff813ebf46>] ? md_update_sb+0x206/0x3f0 kernel: [<ffffffff813ee922>] ? md_check_recovery+0x3f2/0x6d0 kernel: [<ffffffffa005b129>] ? raid1d+0x49/0x1050 [raid1] kernel: [<ffffffff814ed985>] ? schedule_timeout+0x215/0x2e0 kernel: [<ffffffff814ef447>] ? _spin_unlock_irqrestore+0x17/0x20 kernel: [<ffffffff813eb336>] ? md_thread+0x116/0x150 kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40 kernel: [<ffffffff813eb220>] ? md_thread+0x0/0x150 kernel: [<ffffffff810906a6>] ? kthread+0x96/0xa0 kernel: [<ffffffff8100c14a>] ? child_rip+0xa/0x20 kernel: [<ffffffff81090610>] ? kthread+0x0/0xa0 kernel: [<ffffffff8100c140>] ? child_rip+0x0/0x20 And something is wrong with the Named/BIND resulting in the same error for all domains: zone DOMAINEXAMPLE.com/IN: loading from master file DOMAINEXAMPLE.com failed: file not found zone DOMAINEXAMPLE.com/IN: not loaded due to errors. _default/DOMAINEXAMPLE.com/IN: file not found I'm pretty sure this is not enough information to solve the problem, but I'm willing to engage someone who can work this out for me. Any help would be greatly appreciated.
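
    On a CentOS/RHEL 6 box where every vhost suddenly returns 403 with nothing in the access log, the usual suspects are ownership and permissions on the DocumentRoots and SELinux contexts that got disturbed along with the firewall work. Hedged checks, with placeholder paths for wherever the vhosts actually live:

        # what Apache itself says about the denials
        tail -n 50 /var/log/httpd/error_log
        # is SELinux enforcing, and do the content directories carry httpd_sys_content_t?
        getenforce
        ls -ldZ /var/www /var/www/vhosts
        # restore default contexts if they look wrong
        restorecon -Rv /var/www

    The named errors are a separate problem: the zone files referenced in named.conf are simply not where it expects them, so those file paths need to be checked against what is actually on disk.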

    Read the article

  • Rails program is not being run on apache + passenger - only directory listing

    - by user909943
    I have successfully installed Phusion passenger + Apache 2 + Rails 3.1 program + git on a linux Debian 6. I ran passenger-install-apache2-module and followed the configuration instructions. I followed also the setup instructions at https://help.ubuntu.com/community/RubyOnRails#Configure%20Apache My program is in /var/www/myrailsproject and runs fine on webrick on my mac. When going to myhomepage.com (example) I only see directory listing. By preventing directory listing and setting Options -Indexes in < Document tag in /etc/apache2/sites-available default or myhomepage.com I get an error on my website: Forbidden You don't have permission to access / on this server. Apache/2.2.19 (Debian) Server at myhomepage.com Port 80 In /etc/apache2/apache.conf I added: ServerName myhomepage.com LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.8/ext/apache2/mod_passenger.so PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.8 PassengerRuby /usr/bin/ruby1.8 In /etc/apache2/sites-available myhomepage.com: < VirtualHost *:80 ServerName myhomepage.com ServerAlias www.myhomepage.com DocumentRoot /var/www/myrailsproject/public ErrorLog /var/www/logs/error.log CustomLog /var/www/logs/access.log combined RailsEnv test RackEnv test RailsBaseURI /mayrailsproject < Directory /var/www/myrailsproject> Options -Indexes FollowSymLinks -MultiViews AllowOverride all Order allow,deny allow from all < /Directory> < Directory /var/www/myrailsproject/public> AllowOverride All Options -Indexes +FollowSymLinks MultiViews Order allow,deny Allow from all < /Directory> RailsSpawnMethod smart PassengerPoolIdleTime 1000 RailsAppSpawnerIdleTime 0 RailsFrameworkSpawnerIdleTime 0 PassengerMaxRequests 5000 PassengerStatThrottleRate 5 < /VirtualHost I think I have tried all possible combinations of values and variables in < Directory (and < Directory /, < Directory /var/www etc.) the dafault looks like: < VirtualHost *:80 ServerName myhomepage.com RailsBaseURI /myrailsproject DocumentRoot /var/www/myrailsproject/public RackEnv test RailsEnv test < Directory /var/www/myrailsproject> Options -Indexes FollowSymLinks -MultiViews AllowOverride None Order deny,allow Deny from all < /Directory> <Directory /root/public/myrailsproject/public> Options -Indexes FollowSymLinks -MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" < Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 < /Directory> < /VirtualHost So I wonder why my rails project is not being run, only directory listing. I do not have any index file in my project, routes.rb routes to root :to = 'static_pages#home' I think all permissions are as they should be. Thanks a lot in advance Regards Íris
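
    A directory listing usually means the request is being answered by a vhost where Passenger is not active (often the default site) rather than the one above. A few hedged checks, assuming Debian's usual Apache layout:

        # which vhost wins for this ServerName, and is the site enabled at all?
        apache2ctl -S
        a2ensite myhomepage.com && a2dissite default
        # is the Passenger module actually loaded?
        apache2ctl -M | grep -i passenger
        service apache2 reload

    Also note that the vhost sets RailsBaseURI /mayrailsproject (which looks like a typo), and with DocumentRoot already pointing at the app's public/ directory a RailsBaseURI should not be needed at all.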

    Read the article

  • Apache config that uses two document roots based on whether the requested resource exists in the first

    - by mattalexx
    Background I have a client site that consists of a CakePHP installation and a Magento installation: /web/example.com/ /web/example.com/app/ <== CakePHP /web/example.com/app/webroot/ <== DocumentRoot /web/example.com/app/webroot/store/ <== Magento /web/example.com/config/ <== Site-wide config /web/example.com/vendors/ <== Site-wide libraries The server runs Apache 2.2.3. The problem The whole company has FTP access and got used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site. Code merges, tarring the live code, etc, is very complicated and usually requires a bunch of filters. Abandoned solution At first, I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work for these reasons: Firstly, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to certain resources they've uploaded to the server, using the main domain, and also using abstract subdomains that use the main virtual host because it has ServerAlias *.example.com. So suddenly having them only use static.example.com isn't feasible. Secondly, The PHP scripts in their projects are potentially very non-portable. I want their files to stay in as similar an environment as they were built as I can. Also, I do not want to debug their code to make it portable. Half-baked solution After some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were. This would ensure that I didn't break any of their projects that needed HTTP access. It would look something like this: /web/example.com/ <== A bunch of their files are in here /web/example.com/app/webroot/ <== 1st DocumentRoot; A bunch of their files are in here /web/example.com/app/webroot/store/ <== Some more are in here /web/example.com/site/ <== New dir; Contains only site files /web/example.com/site/app/ <== CakePHP /web/example.com/site/app/webroot/ <== 2nd DocumentRoot /web/example.com/site/app/webroot/store/ <== Magento /web/example.com/site/config/ <== Site-wide config /web/example.com/site/vendors/ <== Site-wide libraries After I made this change, I would not need to pay attention to anything except for the stuff within /web/example.com/site/ and my job would be a lot easier. I would be the only one changing stuff in there. So here's where the Apache magic would happen: I need an HTTP request to http://www.example.com/ to first use /web/example.com/app/webroot/ as the document root. If nothing is found (no miscellaneous uploaded company projects are found), try finding something within /web/example.com/site/app/webroot/. Another thing to keep in mind is, the site might have some problems if the $_SERVER['DOCUMENT_ROOT'] variable reads /web/example.com/app/webroot/ but the actual files are within /web/example.com/site/app/webroot/. It would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything within the /web/example.com/site/app/webroot/ directory. Conclusion Is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
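
    One hedged way to get "try the first root, fall back to the second" on Apache 2.2 is a pair of mod_rewrite file tests in the main vhost; the paths come from the layout above, and whether $_SERVER['DOCUMENT_ROOT'] reporting the old webroot actually breaks CakePHP or Magento would still need testing.

        <VirtualHost *:80>
            ServerName www.example.com
            DocumentRoot /web/example.com/app/webroot

            RewriteEngine On
            # if the request does not resolve to a real file or directory under the old webroot...
            RewriteCond /web/example.com/app/webroot%{REQUEST_URI} !-f
            RewriteCond /web/example.com/app/webroot%{REQUEST_URI} !-d
            # ...serve it from the clean site tree instead
            RewriteRule ^/?(.*)$ /web/example.com/site/app/webroot/$1 [L]

            <Directory /web/example.com/site/app/webroot>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>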

    Read the article

  • Apache URL rewriting in reverse proxy

    - by Jeremy Gooch
    I'm deploying Apache in front of a Karaf-hosted application (Apache and Karaf are on separate servers). I want Apache to operate as a reverse proxy and also to hide part of the URL. The URL to get the log-in page of the application directly from the app server is http://app-server:8181/jellyfish. Pages are served by the Jetty instance running within Karaf. Of course, this behaviour would usually be blocked by the firewall for everything except the reverse proxy server. With the firewall off, if you hit this URL then Jetty loads the log-in page. The browser's address bar correctly changes to http://app-server:8181/jellyfish/login?0 and everything works. What I want is for http://web-server (i.e. from the root) to map to Jetty on the app server with the name of the app (jellyfish) suppressed. e.g. The browser would change to show http://web-server/login?0 in the address bar and all subsequent URLs and content would be served with the web-server's domain and without the jellyfish clutter. I can get Apache to operate as a simple reverse proxy, using the following config (snippet):- ProxyPass /jellyfish http://app-server:8181/jellyfish ProxyPassReverse / http://app-server:8181/ ...but this requires the browser's URL to contain jellyfish and going to the root URL (http://web-server) gives a 404 Not Found. I've spent a lot of time trying to use mod_rewrite with and without its [P] flag to get around this, but without success. I then tried the ProxyPassMatch directive, but I can't seem to get that quite correct either. Here's the current config, as is loaded into /etc/apache2/sites-available/ on the web server. Note that there is a locally-hosted images directory. I've also kept the mod_rewrite proxy exploit protection and am suppressing a couple of mod_security rules that were giving false positives. <VirtualHost *:80> ServerAdmin admin@drummer-server ServerName drummer-server ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /images/ "/var/www/images/" RewriteEngine On RewriteCond %{REQUEST_URI} !^$ RewriteCond %{REQUEST_URI} !^/ RewriteRule .* - [R=400,L] ProxyPass /images ! ProxyPassMatch ^/(.*) http://granny-server:8181/jellyfish/$1 ProxyPassReverse / http://granny-server:8181/jellyfish ProxyPreserveHost On SecRuleRemoveById 981059 981060 <Directory "/var/www/images"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> </VirtualHost> If I go to http://web-server, I get redirected to http://web-server/jellyfish/home but this gives a 404, with a complaint about trying to access /jellyfish/jellyfish/home - NB the browser's address bar does not contain the double /jellyfish. HTTP ERROR 404 Problem accessing /jellyfish/jellyfish/home. Reason: Not Found And, if I go to http://web-server/login, I get redirected to http://web-server/jellyfish/login?0 but this gives a 404, with a complaint about trying to access /jellyfish/jellyfish/login. HTTP ERROR 404 Problem accessing /jellyfish/jellyfish/login. Reason: Not Found So, I'm guessing I'm somehow passing through the rules twice. I am also slightly bemused as to where the home bit of the URL comes from in the first example. Can someone point me in the right direction, please? Thanks, J.
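
    For what it is worth, the doubled /jellyfish/jellyfish pattern usually appears when the backend's own redirects (Location headers pointing at /jellyfish/...) come back unrewritten and are then re-prefixed by the ProxyPassMatch rule. A hedged variant that keeps the /images exception; the path-only ProxyPassReverse line is there because ProxyPreserveHost On makes the backend emit redirects carrying the frontend hostname, which the full-URL form will not match:

        ProxyPass        /images !
        ProxyPass        /  http://granny-server:8181/jellyfish/
        ProxyPassReverse /  http://granny-server:8181/jellyfish/
        ProxyPassReverse /  /jellyfish/
        # if the app scopes its cookies to /jellyfish, rewrite their path as well
        ProxyPassReverseCookiePath /jellyfish /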

    Read the article

  • Publishing an Excel spreadsheet using Microsoft SBS 2008 to a web page that is viewable by mobile ph

    - by Dave Heath
    I am getting well out of my “superuser” depth here and would love some support. At work we have an Excel workbook (*.xls format circa Office 2003) which maintains our “engineers” timesheet. This handles what events we are doing across the year and how many “work units” it is. As far as a workbook goes, it is fairly simple with just a few =SUM(range) cells and some linked across sheets (12 sheets, one for each month) It is stored on a server, in a folder that provides “management” with full access and “engineers” with read-only access. The workbook itself is read-only for “engineers” and full access for “management”. I think these permissions are controlled through Active Directory. The workbook is protected with a password, assumingly to allow “management” to edit it even if they are working at a terminal logged in as an “engineer”. This protection prevents “engineers” from going to certain cells to see formulae and therefore editing them. The workbook has a macro which saves and closes it ten minutes after opening. This is to stop other “management” from being locked out by any one person who has logged in with editing privileges. I hope this is making sense to someone... :S Now then, we have Microsoft Small Business Server 2008. We have a shiny new web-based login for when we are offsite so we can get to Exchange webmail and our internal site (which uses Sharepoint 3.0). “Management” would like to be able to publish this timesheet automatically after changes (they don’t want to have to do anything different to what they are currently doing) so that using an iPhone “engineers” can check on it while out of the office. I am currently having a look at “Excel Services” for Office 2007 on TechNet but I am not sure if I am running down the right garden path at the moment. < EDIT This seems to suggest that I have to have Sharepoint Server 2007, with no mention of Sharepoint 3.0... ... "MOSS builds on WSS by adding both core features as well as end user web parts" - Wikipedia entry for Microsoft Office SharePoint Server (MOSS) this is not good news... "...and using the ASP.NET APIs, web parts can be written to extend the functionality of WSS." Wikipedia entry for Windows Sharepoint Services. Could this bring back what I need? Is this good news? Do I need to start learning ASP.NET? This link here implies that we need MOSS to do what I want and the bosses say we aint' getting it. http://serverfault.com/questions/20198/what-is-some-cool-things-you-can-do-with-sharepoint-2007/22128#22128 Back to the drawing board. < /EDIT Please could someone suggest some “further reading” for me to help point me in the right direction or to put me back on the right track. Many thanks. I will try to keep this up to date with how I get on.

    Read the article

  • Pairing Bluetooth device with PIN fails

    - by Pikaro
    I'm trying to pair my old BlackBerry 8310 to my Linux desktop (up-to-date Debian Sid, 3.15-10.dmz.1-liquorix-amd64) by using blueman and its associated tools. Scanning for the device works equally well for both sides; however, I am unable to pair the two once it comes to entering the PIN. If I scan from my PC, I have two options in blueman-manager regarding my phone: Directly selecting "pair", or selecting "setup". If I select "pair", nothing happens on my desktop, but the phone asks me to enter a PIN; if I do so, it reports that pairing has failed. During that, nothing is logged to the console. Selecting "setup" opens a configuration dialog that allows for entering or generating a PIN. Regardless, I get to a screen that tells me to enter the PIN on the phone, and at the same time, the phone pops up the equivalent dialog. This would be what one would expect to work; but whatever I enter (naturally, the same on both), both devices report that pairing has failed, and blueman-manager logs init_services (/usr/lib/python2.7/dist-packages/blueman/main/Device.py:73) Loading services org.bluez.Error.AuthenticationFailed: Authentication Failed If I instead try to pair from the phone, I cannot see any kind of reaction from my desktop - all I get is the equivalent "pairing failed" message from the BlackBerry after I entered a PIN in the dialog that pops up there. hcitool scan and hciconfig -a work without complaints, but I cannot find a way to try the pairing as a whole on the console since bluez-simple-agent seems to have been discontinued and this recommendation is everywhere on Google. hcitool cc as root opens the PIN dialog on the phone, then fails with "Input/Output error" once I enter it. The user is not permitted to execute this command. I also tried creating /usr/lib/bluetooth/<MAC>/pincodes to manually define a persistent PIN, which seems to have had no effect. The same goes for running the different commands as root, though I'm really confused about the internal structure of the Bluetooth subsystem now: They usually and inconsistently failed with Python or DBUS errors or just showed the same results. The only other Bluetooth device I have around are a pair of Creative speakers. Trying "setup" asks me to enter a key on them, which is impossible. If I try "pair", I'm asked for a PIN as I should, but no pairing takes place, and no errors appear on the console. (It just repeats their name a few times.) Interestingly, I tried that before writing my question, and nothing happened in terms of PIN questions, just like with the BlackBerry, which still shows no change. I don't think I actively changed anything since then. The BlackBerry can pair with and connect to the speakers, and everything goes as one would expect, so the problem is definitely with my desktop. So thus my questions: What is that PIN window generated by, and why does it seem to appear randomly? How can I find out what, exactly, fails after trying to add the speakers, as this may give me a clue? Is there any kind of complete log that concerns itself with Bluetooth? What data can I provide to make this more solvable?
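
    With bluez-simple-agent gone, the console route on a BlueZ 5 stack is bluetoothctl, which runs the whole pairing with an interactive agent and prints the failure reason directly. A hedged sketch; the MAC address is a placeholder for the BlackBerry, and the agent capability may need adjusting:

        $ bluetoothctl
        [bluetooth]# power on
        [bluetooth]# agent KeyboardDisplay
        [bluetooth]# default-agent
        [bluetooth]# scan on
        [bluetooth]# pair 00:11:22:33:44:55
        [bluetooth]# trust 00:11:22:33:44:55

    The pair command should prompt for the PIN on the desktop side; enter the same code on the phone when it asks. If the system is still on BlueZ 4 this tool will not be present, which itself narrows down where the problem lies.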

    Read the article

  • IPv6: Should I have private addresses?

    - by AlReece45
    Right now, we have a rack of servers. Every server right now has at least 2 IP addresses, one for the public interface, another for the private. The servers that have SSL websites on them have more IP addresses. We also have virtual servers that are configured similarly. Private Network The private range is currently just used for backups and monitoring. It's a gigabit port; the interface usage does not usually get very high. There are other technologies we're considering using that would use this port: iSCSI (implementations usually recommend dedicating an interface to it, which would be yet another IP network), VPN to get access to the private range (something I'd rather avoid), dedicated database servers, LDAP, centralized configuration (like puppet), and centralized logging. We don't have any private addresses in our DNS records (only public addresses). For our servers to utilize the correct IP address for the right interface (and not hard-code the IP address) probably requires setting up a private DNS server (so now we add 2 different DNS entries to 2 different systems). Public Network Our public range has a variety of services including web, email, and FTP. There is a hardware firewall between our network and the "public" network. We have a (relatively secure) method to instruct the firewall to open and close administrative access (web interfaces, ssh, etc.) for our current IP address. With either solution discussed, the host-based firewalls will be configured as well. The public network currently runs at a dedicated 20Mbps link. There are a couple of legacy servers with fast-ethernet ports, but they are scheduled for decommissioning. All of the other production boxes have at least 2 Gigabit Ethernet ports. The more traffic-heavy servers have 4-6 available (none is using more than the 2 Gigabit ports right now). IPv6 I want to get an IPv6 prefix from our ISP, so that every "server" has at least one IPv6 interface. We'll still need to keep the IPv4 addresses up and available for legacy clients (web servers and email at the very least). We have two IP networks right now. Adding the public IPv6 address would make it three. Just use IPv6? I'm thinking about just dumping the private IPv4 range and using the IPv6 range as the primary means of all communications. If an interface starts reaching its capacity, utilize the newly freed interfaces to create a trunk. This has the advantage of leaving headroom if either the public or private traffic needs to exceed 1Gbps. The traffic for each interface is already analyzed on a regular basis to predict future bandwidth use. In the rare instances where bandwidth unexpectedly peaks: utilize QoS to ensure traffic (like our limited SSH access) is prioritized correctly so the problem can be corrected (if possible; our WAN is the bottleneck right now). It also has the advantage of not needing to make an entry for every private address. We may have private DNS (or just LDAP), but it'll be much more limited in scope, with fewer entries to duplicate. Summary I'm trying to make this network as "simple" as possible. At the same time, I want to make sure it's reliable, upgradeable, scalable, and (eventually) redundant. Having one IPv6 network and a legacy IPv4 network seems to be the best solution to me. Regarding using assigned IPv6 addresses for both networks, sharing the available bandwidth on one (more trunked if needed): Are there any technical disadvantages (limitations, buffers, scalability)?
Are there any other security considerations (aside from the firewalls mentioned above)? Are there regulations or other security requirements (like PCI-DSS) that this doesn't meet? Is there typical software for setting up a Linux network that doesn't have IPv6 support yet (logging, LDAP, puppet)? Is there something else I didn't consider?
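    If the private IPv4 range is dropped, each box effectively runs dual-stack on its public-facing interface (or trunk). A minimal sketch with iproute2, using placeholder addresses from the documentation ranges 203.0.113.0/24 and 2001:db8::/32 rather than real assignments:

        # legacy IPv4 service address stays where it is
        ip addr add 203.0.113.25/28 dev eth0
        # IPv6 address used for everything internal (backups, LDAP, logging, ...)
        ip -6 addr add 2001:db8:10::25/64 dev eth0
        ip -6 route add default via 2001:db8:10::1
        # quick sanity check that both families answer
        ping -c 1 203.0.113.1
        ping6 -c 1 2001:db8:10::1

    This does not answer the PCI-DSS or software-support questions; it only shows that the host-side configuration for the "one IPv6 network plus legacy IPv4" layout is small.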

    Read the article

  • PHP, Apache and curl: Differences between Windows and Linux?

    - by beginner_
    I'm trying to run my PHP app on Ubuntu Server 11.10. This app works fine under Apache + PHP on Windows. I have other applications that I can simply copy & paste between the two OSes and they work on both (these don't use cURL). However, this one uses the PHP library Tonic (RESTful web services) and makes use of the PHP cURL module. The problem is that I'm not getting an error message, which makes it very hard to find the cause. I (must) use NTLM authentication and this is done with the AuthenNTLM Apache module: Order allow,deny Allow from all PerlAuthenHandler Apache2::AuthenNTLM AuthType ntlm AuthName "Protected Access" require valid-user PerlAddVar ntdomain "domainName server" PerlSetVar defaultdomain domainName PerlSetVar ntlmsemtimeout 2 PerlSetVar ntlmdebug 1 PerlSetVar splitdomainprefix 0 All files that cURL needs to fetch override AuthenNTLM authentication: order deny,allow deny from all allow from 127.0.0.1 Satisfy any Since these files are only fetched by cURL from the same server, access can be limited to localhost. Possible issues are: NTLM auth isn't overridden for files requested through cURL (even though AllowOverride All is set); cURL works differently on Linux: $ch = curl_init(); curl_setopt($ch, CURLOPT_COOKIE, $strCookie); curl_setopt($ch, CURLOPT_URL, $baseUrl . $queryString); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); $html = curl_exec($ch); curl_close($ch); other? The Apache log says: [error] Bad/Missing NTLM/Basic Authorization Header for /myApp/webservice/local/viewList.php But this directory should override NTLM authentication. Using the curl command line from Windows to access the same resource I get: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html> <head> <title>406 Not Acceptable</title> </head> <body> <h1>Not Acceptable</h1> <p>An appropriate representation of the requested resource /myApp/webservice/myResource could not be found on this server.</p> Available variants: <ul> <li><a href="myResource.php">myResource.php</a> , type application/x-httpd-php</li> </ul> <hr> <address>Apache/2.2.20 (Ubuntu) Server at localhost Port 80</address> </body> </html> Note: This is a duplicate of http://stackoverflow.com/questions/9821979/php-curl-on-linux-what-is-the-difference-to-curl-on-windows as it was suggested I post it here. EDIT: Please see Ubuntu Server: Apache2 seems to attach .php to URI, as I discovered why it does not work but need help so the issue does not occur anymore. ANSWER: The issue is the default Apache configuration on Ubuntu: Options Indexes FollowSymLinks MultiViews MultiViews is changing request_uri from myResource to myResource.php. Solutions: disable MultiViews in .htaccess (Options -MultiViews); remove MultiViews from the default config; rename the file, for example to myResourceClass. I chose the last option because it should work regardless of configuration, and I only have 3 such files so the change took about 30 secs...
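    One way to surface the error the PHP call is swallowing is to replay the request from the Ubuntu box's own command line and read the full exchange; a minimal sketch, with the path, domain, and credentials as placeholders:

        # -v prints the request and response headers, so a 401/406 from Apache becomes visible
        curl -v 'http://localhost/myApp/webservice/local/viewList.php?foo=bar'
        # if the target is still behind AuthenNTLM, retry with explicit NTLM credentials
        curl -v --ntlm -u 'domainName\user:password' 'http://localhost/myApp/webservice/local/viewList.php'

    Inside the PHP code, curl_error($ch) and curl_getinfo($ch, CURLINFO_HTTP_CODE) after curl_exec() expose the same information, which at least turns the silent failure into a concrete status code.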

    Read the article

  • Print directly to CUPS server from non-local clients (Ubuntu 14.04)

    - by OEP
    I set up a CUPS server with a few queues and printing from local clients (the CUPS test page and Samba) seems to work just fine. It seems like the CUPS server is denying non-local clients though: 130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 390 Validate-Job successful-ok 130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 339 Create-Job client-error-not-authorized localhost - - [03/Jun/2014:14:40:50 -0400] "POST /printers/m137 HTTP/1.1" 200 410869 Print-Job successful-ok This makes me think I have some sort of host-based restriction in my configuration file, but I can't find it. I've even set my default policy to Allow all only to get the same log message. I'm working from a configuration file which had previously worked on an older version of CUPS, which looks quite similar to the example cupsd.conf. I could be wrong but it looks like that final <Limit All> block ought to allow the actions the logs complain about. MaxLogSize 2000000000 # Log general information in error_log - change "info" to "debug" for # troubleshooting... LogLevel info #AccessLog syslog #ErrorLog syslog #PageLog syslog # Administrator user group... SystemGroup sys root lp # Only listen for connections from the local machine. Listen 0.0.0.0:631 Listen :::631 Listen /var/run/cups/cups.sock ServerName <snipped> # Show shared printers on the local network. Browsing Off BrowseOrder allow,deny # (Change '@LOCAL' to 'ALL' if using directed broadcasts from another subnet.) BrowseAllow @LOCAL # Default authentication type, when authentication is required... DefaultAuthType Basic # Restrict access to the server... <Location /> Order allow,deny Allow all </Location> # Restrict access to the admin pages... <Location /admin> AuthType Default Require user @SYSTEM Encryption Required Order allow,deny Allow all </Location> # Restrict access to configuration files... <Location /admin/conf> AuthType Default Require user @SYSTEM Encryption Required Order allow,deny Allow all </Location> # Set the default printer/job policies... <Policy default> # Job-related operations must be done by the owner or an administrator... <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job> Require user @OWNER @SYSTEM Order deny,allow </Limit> # All administration operations require an administrator to authenticate... <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default> AuthType Default Require user @SYSTEM Order deny,allow </Limit> # All printer operations require a printer operator to authenticate... <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs> AuthType Default Require user @SYSTEM Order deny,allow </Limit> # Only the owner or an administrator can cancel or authenticate a job... <Limit Cancel-Job CUPS-Authenticate-Job> Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit All> Order allow,deny </Limit> </Policy>
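    A quick way to separate a Location/network problem from an operation-policy problem is to submit a job from one of the non-local clients with the CUPS command-line tools and watch error_log with LogLevel debug; a minimal sketch, with the server hostname as a placeholder and the m137 queue from the log above:

        # from the remote client: point lp at the CUPS server explicitly
        lp -h cupsserver.example.edu:631 -d m137 /etc/hosts
        # and ask the server for its view of the queue
        lpstat -h cupsserver.example.edu:631 -p m137 -l

    If that still logs Create-Job client-error-not-authorized, the rejection is coming from the per-operation policy rather than the <Location /> block, and adding an explicit "Allow all" inside the catch-all <Limit All> section (mirroring the Location block) is one low-risk thing to try before digging further.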

    Read the article

  • nginx + php-fpm - where are my $_GET params?

    - by egis
    I have a strange problem here. I just moved from Apache + mod_php to nginx + php-fpm. Everything went fine except for this one problem. I have a site, let's say example.com. When I access it like example.com?test=get_param, $_SERVER['REQUEST_URI'] is /?test=get_param and there is a $_GET['test'] also. But when I access example.com/ajax/search/?search=get_param, $_SERVER['REQUEST_URI'] is /ajax/search/?search=get_param yet there is no $_GET['search'] (there is no $_GET array at all). I'm using the Kohana framework, which routes /ajax/search to a controller, but I've put phpinfo() in index.php so I'm checking for $_GET variables before the framework does anything (this means the disappearing GET params aren't the framework's fault). My nginx.conf is like this worker_processes 4; pid logs/nginx.pid; events { worker_connections 1024; } http { index index.html index.php; autoindex on; autoindex_exact_size off; include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 128; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; error_log logs/error.log debug; sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 2; gzip on; gzip_comp_level 2; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; include sites-enabled/*; } and example.conf is like this server { listen 80; server_name www.example.com; rewrite ^ $scheme://example.com$request_uri? permanent; } server { listen 80; server_name example.com; root /var/www/example/; location ~ /\. { return 404; } location / { try_files $uri $uri/ /index.php; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /usr/local/nginx/conf/fastcgi_params; } location ~* ^/(modules|application|system) { return 403; } # serve static files directly location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt)$ { access_log off; expires 30d; } } fastcgi_params is like this fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; fastcgi_param QUERY_STRING $query_string; fastcgi_param PATH_INFO $fastcgi_path_info; What is the problem here? By the way, there are a few more sites on the same server, both Kohana-based and plain PHP, that are working perfectly.
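    One thing commonly suggested for this exact symptom (an assumption about this setup, not a confirmed diagnosis) is that the try_files fallback re-targets the request to /index.php without explicitly carrying the original query string, so QUERY_STRING reaches php-fpm empty. Making it explicit is cheap to test:

        location / {
            # append the original arguments when falling back to the front controller
            try_files $uri $uri/ /index.php?$query_string;
        }

    If the parameters show up after a reload, that fallback was the culprit; if not, the duplicated QUERY_STRING line and the PATH_INFO line (set without a matching fastcgi_split_path_info) at the bottom of fastcgi_params are the next things to prune.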

    Read the article

  • Set up two IP addresses with one gateway?

    - by Ahmed
    I would like to ask if it is possible to set up two static IPs from the same subnet through one gateway, and how, if it is? What I am interested in is described here: Routing for multiple uplinks/providers, but in my case I have two IP addresses from one provider, both on the same subnet, and of course I have internet access on both. I have two NICs, but I don't mind going with one if that makes it possible. Any thoughts are appreciated!
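    Since both addresses are in the same subnet and share one gateway, no policy routing is needed; both can simply live on a single NIC as primary and secondary addresses. A minimal iproute2 sketch, using placeholder addresses from the 203.0.113.0/24 documentation range:

        # two addresses from the same subnet on one interface
        ip addr add 203.0.113.10/24 dev eth0
        ip addr add 203.0.113.11/24 dev eth0
        # one default route through the shared gateway covers both
        ip route add default via 203.0.113.1 dev eth0

    Outgoing connections normally use the first (primary) address unless an application binds to the second; incoming connections work on either. The multi-uplink routing recipe linked above only becomes necessary when the addresses sit in different subnets or behind different gateways.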

    Read the article

  • How To Start Your Own Professional Blog with WordPress

    - by Matthew Guay
    Would you like to start your own blog or website?  With a free WordPress  account, it’s free and easy to get started creating your own professional quality blog site. This is the first part in a series on how to create your own professional quality blog site. No, we’re not talking about some cheapo looking blog from Blogger or something on Facebook, but creating a quality blog you can be proud of and present to millions of readers online. WordPress is one of the most popular blogging platforms, powering hundreds of high-profile websites and blogs around the world.  It’s both powerful and easy to use, which makes it great whether you’re just starting out or are a blogging pro.  To start out with your blogging project WordPress is completely free, and you can use the online interface or install the WordPress software on your own server and blog from there. Getting Started You can start a blog in just a few minutes.  Head over to WordPress.com and click Sign up now on the right-hand side of the main page. Enter a username and password, check that you agree with the legal terms, select the “Gimme a blog” bullet, and click Next. WordPress may inform you that your username is already taken, simply choose a new one and try again. Next, choose a domain for your blog.  This will be the address for your site, and cannot be changed, so be sure to choose exactly what you want.  If you’d prefer your address to be yourname.com instead of yourname.wordpress.com, you can add your own domain for a fee after your blog is setup…but we’ll cover that later. Once you click signup, you will be sent a confirmation email.  While you wait for the email to arrive you can go ahead and enter in your name and a short bio about yourself. When you receive your confirmation email, click the link.  Congratulations; you now have your own blog! You can view your new blog immediately, though the default theme isn’t very interesting without your content and pictures. Back on the page you opened from the email, click Login to access your blog’s administration page and to start adding stuff to your blog.  You can also access your blog’s admin page anytime by from yourname.wordpress.com/admin, substituting your own blog name for yourname. Enter your username and password, then click Log in to get started. Adding Content to your WordPress.com Blog When you sign in to your WordPress blog, you’ll first see the WordPress Admin page.  Here you can see recent posts and comments, and you can see stats of how many people have visited your site.  You can also access all of your blog tools and settings right from this page. To add a new post to your blog, click the Posts link on the left, then click “Add New” either on the left menu or on the top of the Edit Posts page.  Or, if you want to edit the default first post, hover over it and select Edit. Or click the New Posts button on the top of the page.  This menu bar is always visible whenever you’re logged in, so it’s an easy way to add a post. The editor lets you easily write anything you want in a Microsoft Word-style editor.  You can format your text, add lists, links, quotes, and more.  When you’re ready to share your content with the world, click Publish on the right side. To add pictures or other files, click the picture icon beside “Upload/Insert”.  Your free blog account can store up to 3Gb of pictures and documents which will definitely give you a good start. Click Select Files, and then choose the pictures or documents you want to add to your post. 
    When the pictures have uploaded, you can add a caption and choose how to position the picture.  When you're finished, select "Insert into Post".   Or, if you want to add a video, click the video button.  You have to add a paid upgrade to upload videos directly, but you can add YouTube and other online videos for free. Click the "From URL" tab, and then paste the link to the YouTube video and click Insert into post. If you're a code geek, click the HTML tab in the editor and edit the HTML of your blog post the geeky way. Once you've added all your content and edited it the way you want, click the Publish button on the right of the editor.  Or, you can click Preview to make sure it looks right, and then click Publish. Here's our blog with the new blog post containing a picture and video.  While you're getting to know your way around the controls in WordPress, the Preview feature will be your best friend as you organize the content to your liking.   Conclusion It only takes a couple of minutes to get started blogging at WordPress.com. Whether you want to write about your daily life, share pictures of your children, or review the latest books and gadgets, WordPress.com is a great place to get started for free.  We've only covered a small portion of the WordPress features, but this should get you started. Check back for more WordPress and blogging coverage coming up soon! Links Signup for a free WordPress.com account

    Read the article

  • Integrate SharePoint 2010 with Team Foundation Server 2010

    - by Martin Hinshelwood
    Our client is using a brand new shiny installation of SharePoint 2010, so we need to integrate our upgraded Team Foundation Server 2010 instance into it. In order to do that you need to run the Team Foundation Server 2010 install on the SharePoint 2010 server and choose to install only the "Extensions for SharePoint Products and Technologies". We want our upgraded Team Project Collection to create any new portal in this SharePoint 2010 server farm. There are a number of goodies above and beyond a solution file that require the install, with the main one being the TFS 2010 client API. These goodies allow proper integration with the creation and viewing of Work Items from SharePoint, a new feature with TFS 2010. This works in both SharePoint 2007 and SharePoint 2010, with the level of integration dependent on the version of SharePoint that you are running. There are three levels of integration, with "SharePoint Services 3.0" or "SharePoint Foundation 2010" being the lowest. This level only offers framed Reporting Services integration for reporting, along with Work Item integration and document management. The highest is Microsoft Office SharePoint Server (MOSS) Enterprise, with Excel Services integration providing some lovely dashboards. Figure: Dashboards take the guessing out of project planning and estimation. Plus, writing these reports would be boring!   The Extensions that you need are on the same installation media as the main TFS install, and the only difference is the options you pick during the install. Figure: Installing the TFS 2010 Extensions for SharePoint Products and Technologies onto SharePoint 2010   Annoyingly you may need to reboot a couple of times, but on this server the process was MUCH smoother than on our internal server. I think this was mostly to do with this being a clean install. Once it is installed you need to run the configuration. This will add all of the Solutions and Templates that are needed for SharePoint to work properly with TFS. Figure: This is where all the TFS 2010 goodies are added to your SharePoint 2010 server and the TFS 2010 object model is installed.   Figure: All done, you have everything installed, but you still need to configure it Now that we have the TFS 2010 SharePoint Extensions installed on our SharePoint 2010 server, we need to configure them both so that they will talk happily to each other. Configuring the SharePoint 2010 Managed path for Team Foundation Server 2010 In order for TFS to automatically create your project portals you need a wildcard managed path set up; this is where TFS will create the portal during the creation of a new Team Project. (The same step can also be scripted, as sketched at the end of this article.) To find the managed paths page for any application you need to first select the "Manage web applications" link from the SharePoint 2010 Central Administration screen. Figure: Find the "Manage web applications" link under the "Application Management" section. Once you are there you will see that the "Managed Paths" button is there; it is just greyed out, and selecting one of the applications will enable it to be clicked. Figure: You need to select an application for the SharePoint 2010 ribbon to activate.   Figure: You need to select an application before you can get to the Managed Paths for that application. Now we need to add a managed path for TFS 2010 to create its portals under. I have gone for the obvious option of just calling the managed path "TFS02", as the TFS 2010 server is the second TFS server that the client has installed, TFS 2008 being the first.
    This links the location to the server name, and as you can't have two projects of the same name in two separate project collections, there are unlikely to be any conflicts. Figure: Add a "tfs02" wildcard inclusion path to your SharePoint site. Configure the Team Foundation Server 2010 connection to SharePoint 2010 In order to have your new TFS 2010 server talk to and create sites in SharePoint 2010, you need to tell the TFS server where to put them. As this TFS 2010 server was installed in out-of-the-box mode, it has a SharePoint Services 3.0 (the free one) server running on the same box. But we want to change that so we can use the external SharePoint 2010 instance. Just open the "Team Foundation Server Administration Console" and navigate to the "SharePoint Web Applications" section. Here you click "Add" and enter the details for the managed path we just created. Figure: If you have special permissions on your SharePoint you may need to add accounts to the "Service Accounts" section.    Before we can set this new SharePoint 2010 instance to be the default for our upgraded Team Project Collection, we need to configure SharePoint to take instructions from our TFS server. Configure SharePoint 2010 to connect to Team Foundation Server 2010 On your SharePoint 2010 server, open the Team Foundation Server Administration Console and select the "Extensions for SharePoint Products and Technologies" node. Here we need to "grant access" for our TFS 2010 server to create sites. Click the "Grant access" link and fill out the full URL to the TFS server, for example http://servername.domain.com:8080/tfs, and if need be restrict the path that TFS sites can be created on. Remember that when users create a new team project they can change the default and point it anywhere they like, as long as it is an authorised SharePoint location. Figure: Grant access for your TFS 2010 server to create sites in SharePoint 2010 Now that we have an authorised location for our team project portals to be created, we need to tell our Team Project Collection that this is where it should put sites by default for any new Team Projects created. Configure the Team Foundation Server 2010 Team Project Collection to create new sites in SharePoint 2010 Back on our TFS 2010 server, we need to set up the defaults for our upgraded Team Project Collection to point at the new SharePoint 2010 integration we have just set up. On the TFS 2010 server, open up the "Team Foundation Server Administration Console" again and navigate to the "Team Project Collections" node. Once you are there you will see a list of all of your TPCs; in our case we have a DefaultCollection as well as our named and upgraded collection from TFS 2008. If you select the "SharePoint Site" tab you can see that it is not currently configured. Figure: Our newly upgraded TFS 2008 Team Project Collection does not have SharePoint configured Select "Edit Default Site Location" and select the new integration point that we just set up for SharePoint 2010. Once you have selected the "SharePoint Web Application" (the thing we just configured), it will give you an example based on that configuration point and the name of the Team Project Collection that we are configuring. Figure: Set the default location for new Team Project Portals to be created for this Team Project Collection This is where the reason for configuring the Extensions on the SharePoint 2010 server before doing this last bit becomes apparent.
    TFS 2010 is going to create a site at our http://sharepointserver/tfs02/ location called http://sharepointserver/tfs02/[TeamProjectCollection], or whatever we had specified, and it would have had difficulty doing this if we had not given it permission first. Figure: If there is no Team Project Collection site at this location, the TFS 2010 server is going to create one This will create a nice Team Project Collection parent site to contain the portals for any new Team Projects that are created. It is worth noting that it will not create portals for existing Team Projects, as this process is run during the Team Project Creation wizard. Figure: Just a basic parent site to host all of your new Team Project Portals as sub-sites   You will need to add all of the users that will be creating Team Projects as Administrators of this site so that they will not get an error during the Project Creation Wizard. You may also want to customise this as a proper portal to your projects if you are going to be having lots of them, but it is really just a default placeholder so you have a top-level site that you can back up and point at. You have now integrated SharePoint 2010 and Team Foundation Server 2010! You can now go forth and multiply your Team Projects for this Team Project Collection, or you can continue to add portals to your other Collections.   Technorati Tags: TFS 2010,Sharepoint 2010,VS ALM
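    For completeness, the wildcard managed path added through Central Administration above can also be created from the SharePoint 2010 Management Shell; a minimal sketch, assuming the web application URL is http://sharepointserver (a placeholder for your farm's URL):

        # adds "tfs02" as a wildcard inclusion managed path (wildcard is the default when -Explicit is omitted)
        New-SPManagedPath -RelativeURL "tfs02" -WebApplication http://sharepointserver
        # confirm the path is now listed for that web application
        Get-SPManagedPath -WebApplication http://sharepointserver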

    Read the article

< Previous Page | 638 639 640 641 642 643 644 645 646 647 648 649  | Next Page >