Search Results

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files are in much higher demand than others. The scenario: I upload my birthday video to myproject.com and share it with all of my friends; it gets stored on one cluster node which has a 100 Mbit connection. Once all of my friends want to download the file, they can't, because the bottleneck is that 100 Mbit link (about 12.5 MB per second), and with 1000 friends downloading they each get only about 12 KB per second. I am not even taking into account that the HDD is serving the same file to everyone. My network infrastructure is as follows: one 1 Gbit server (the client-facing server) connected to 4 storage nodes that each have a 100 Mbit connection. The 1 Gbit server can handle the traffic of 1000 users if the storage nodes together can stream faster than a single 100 Mbit link to it, with visitors then streaming directly from the client-facing server instead of from the storage nodes. I can achieve this by replicating the file onto 2 nodes, but I don't want to replicate every file uploaded to my network, since that costs much more. So I need a cloud-based system that automatically pushes files to replica nodes when demand for them is high, and deletes the extra copies when demand drops so each file ends up on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It can only replicate all of the files or none of them, but I need the cluster software to do this automatically. Any solutions? (Please don't just recommend Amazon S3.)
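    For illustration only, the kind of behaviour I am after could be faked with a cron job on the client-facing server: count recent downloads per file in the access log, rsync the hot files to a second storage node, and drop the replica again when demand falls. All paths, node names and the threshold below are invented for the sketch, and it assumes a combined-format access log.

      #!/bin/sh
      # Hypothetical demand-based replication sketch (not a product recommendation).
      THRESHOLD=500
      LOG=/var/log/nginx/access.log
      SRC=/data/files
      REPLICA=node2:/data/files

      # Field 7 of a combined-format log line is the request path.
      awk '{ print $7 }' "$LOG" | sort | uniq -c | sort -rn |
      while read count path; do
          file="$SRC$path"
          [ -f "$file" ] || continue
          if [ "$count" -gt "$THRESHOLD" ]; then
              rsync -a "$file" "$REPLICA$path"      # push the hot file to the replica node
          else
              ssh node2 rm -f "/data/files$path"    # demand is low again: drop the copy
          fi
      done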

  • Why is it a bad idea to use a customer's email as the From address?

    - by Crab Bucket
    I've got an application that emails users once they have filled in a form. It uses no-reply@customerdomain.com as the From address. The customer wants it to use the email address entered in the form as the From address, which could be anything. I have been told that this is a bad idea because of spoofing, blacklisting and spam, but I'm vague about the exact reasons, and I need to counsel the client out of this. Can someone explain why it is a bad idea? Interestingly, the client used a Gmail account as the From address in a demo, which not only worked fine but got the application sending emails at all (it wouldn't do so before with an address of the form [email protected]). So what is going on? I'm told one thing and the opposite works. Sorry, I know this is basic, but I couldn't find anything in a Google search, largely because I'm having trouble even framing the question. EDIT: Thank you everyone, great answers. Interestingly, the server sending the email and the mailbox it is going to are both behind the same firewall, so the client says they are unconcerned about spam. Oh well.
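    The concrete version of the objection, as far as I can piece it together: if the form submitter enters an address at a domain that publishes SPF or DMARC records, mail claiming to be From that address but sent from a different server will fail those checks at many receivers and get junked or rejected. A common compromise (shown as an illustrative header sketch, addresses invented) is to keep the From address on a domain you control and put the customer's address in Reply-To:

      From: "Customer Service" <no-reply@customerdomain.com>
      Reply-To: person-who-filled-in-the-form@example.org
      Subject: Thanks for your enquiry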

  • Removing Paths/Landing Pages From SharePoint Search Results

    - by j.strugnell
    Hi there, we've been asked by a client to stop a number of pages from showing up on their public website's search results page. I've been into the SSP and created crawl rules to remove these pages. It all seemed to work, but we have an issue in that landing pages still show up in their "www.domain.com/sitearea/" form, though not in their "www.domain.com/sitearea/pages/default.aspx" form. For each page of this type we created one rule to "Exclude" the "aspx" path and another rule to include the "/" path but "Follow links on the URL without crawling the URL itself". We tried adding rules to exclude the "/" form as well, but that only resulted in everything underneath that path being excluded. Does anybody know how to remove both the "area/pages/default.aspx" and the "area/" paths from the search results? I'm not sure it's the done thing to ask two questions in one, but this is in a similar vein so it should be OK: does anyone know of a tool (or whether it is even possible) to let site admins exclude pages from search results without going via SSP/crawl rules? I know they can do it at the site level, but is there anything that enables this at the page level, through either Page or Site Settings?

  • TCP dies on a Linux laptop

    - by Roman Cheplyaka
    Once every several days I have the following problem. My laptop (Debian GNU/Linux testing) suddenly becomes unable to make TCP connections to the internet. The following things continue to work fine: UDP (DNS) and ICMP (ping), which get instant responses; TCP connections to other machines on the local network (e.g. I can ssh to a neighbouring laptop); and everything is fine for the other machines on my LAN. But when I try TCP connections from my laptop to the internet, they time out (no response to SYN packets). Here's a typical curl output:

      % curl -v google.com
      * About to connect() to google.com port 80 (#0)
      *   Trying 173.194.39.105... * Connection timed out
      *   Trying 173.194.39.110... * Connection timed out
      *   Trying 173.194.39.97... * Connection timed out
      *   Trying 173.194.39.102... * Timeout
      *   Trying 173.194.39.98... * Timeout
      *   Trying 173.194.39.96... * Timeout
      *   Trying 173.194.39.103... * Timeout
      *   Trying 173.194.39.99... * Timeout
      *   Trying 173.194.39.101... * Timeout
      *   Trying 173.194.39.104... * Timeout
      *   Trying 173.194.39.100... * Timeout
      *   Trying 2a00:1450:400d:803::1009... * Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable
      * Success
      * couldn't connect to host
      * Closing connection #0
      curl: (7) Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable

    Restarting the connection and/or reloading the network card's kernel module doesn't help. The only thing that helps is a reboot. Clearly something is wrong with my system (everything else works fine), but I have no idea what exactly. I don't know how to reproduce this, but as I said, it happens every several days. My setup is a wireless router connected to the ISP via PPPoE. Any advice?
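    One diagnostic worth running the next time it happens, before rebooting (a sketch, on the assumption that a netfilter/iptables setup is loaded on the laptop): a full connection-tracking table silently drops new outbound connections while established traffic can appear to keep working, and the state is easy to inspect.

      # Kernel messages about a full conntrack table, if any
      dmesg | grep -i conntrack

      # Current vs. maximum tracked connections (these paths exist only when conntrack is loaded)
      cat /proc/sys/net/netfilter/nf_conntrack_count
      cat /proc/sys/net/netfilter/nf_conntrack_max

      # Compare routing and neighbour state against a known-good moment
      ip route show
      ip neigh show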

  • Users are getting a temporary profile

    - by Serhiy
    A bit about the current setup: the AD servers are Windows 2008 R2 (all of them), and there are a couple of locations configured as Sites. Each location has DFS on its AD server. Roaming profiles are not used or configured. Users have their home folder configured as an S: drive mapped to a DFS shared folder; for example, in the profile tab a user has: Home Folder - Connect - S: to \\domain.com\dc\users\%username%. We have also redirected the Desktop, Documents and Downloads folders to \\domain.com\dc\users. Everything was fine. Suddenly (today), users in most locations lost their local profiles (on both XP and Windows 7 desktops) and got temporary profiles. It also looks like the local profile was created today (judging by the folder properties). I checked the events on a couple of machines and there are no errors related to profiles or the logon process, and I do not see issues in the event logs on the servers either. Basically, I have run out of ideas about what is wrong and why the machines lost their local profiles. PS: Laptop users do not have their folders redirected, but they lost their profiles as well.
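    One place to look on an affected desktop (a diagnostic sketch, not a guaranteed fix): when Windows falls back to a temporary profile it usually renames the user's entry under the ProfileList registry key so that it ends in .bak, so listing that key shows whether the original profile mapping is still there and which folder it points at.

      rem A SID subkey ending in ".bak" marks the orphaned local profile
      reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s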

  • How to get HTTP request details in a tcpdump?

    - by tucson
    I am trying to get a tcpdump trace of some HTTP requests. Here is what I have so far (I replaced the real IP addresses with REMOTE and LOCAL):

      C:\>Windump -na -i 3 ip host REMOTE and ip src LOCAL and tcp port 80
      Windump: listening on \Device\NPF_{8056BE5E-BDBB-44E6-B492-9274B410AD66}
      13:13:34.985460 IP LOCAL.4261 > REMOTE.80: . 1784894764:1784894765(1) ack 1268208398 win 65535
      13:13:38.589175 IP LOCAL.4302 > REMOTE.80: F 3708464308:3708464308(0) ack 982485614 win 65535
      13:13:38.589285 IP LOCAL.4303 > REMOTE.80: F 890175362:890175362(0) ack 2462862919 win 65535
      13:13:38.589330 IP LOCAL.4304 > REMOTE.80: F 1838079178:1838079178(0) ack 156173959 win 65535
      13:13:38.589374 IP LOCAL.4305 > REMOTE.80: F 3952718843:3952718843(0) ack 2209231545 win 65535
      13:13:38.589413 IP LOCAL.4306 > REMOTE.80: F 446105750:446105750(0) ack 3141849979 win 65535
      13:13:38.590265 IP LOCAL.4302 > REMOTE.80: . ack 2 win 65535
      13:13:38.590403 IP LOCAL.4304 > REMOTE.80: . ack 2 win 65535
      13:13:38.590429 IP LOCAL.4303 > REMOTE.80: . ack 2 win 65535
      13:13:38.590484 IP LOCAL.4305 > REMOTE.80: . ack 2 win 65535
      13:13:38.590514 IP LOCAL.4306 > REMOTE.80: . ack 2 win 65535

    But I do not get the following level of detail:

      Request URL: http://domain.com/index.php
      Request Method: POST
      Status Code: 200 OK
      POST /index.php HTTP/1.1
      Host: domain.com
      Connection: keep-alive
      Content-Length: 151
      Cache-Control: max-age=0
      etc

    How can I get this level of data?
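    By default tcpdump/WinDump captures only the first bytes of each packet and prints headers, so the HTTP payload never makes it into the output. A sketch of the usual approach (assuming WinDump accepts the same -s/-A/-w options as stock tcpdump): capture whole packets with -s 0 and print them in ASCII with -A, or write a full capture to a file and open it in Wireshark, which decodes the HTTP requests and responses.

      rem Capture full packets and dump the payload as ASCII (HTTP headers become readable)
      Windump -na -i 3 -s 0 -A ip host REMOTE and tcp port 80

      rem Or save a full capture to a file and inspect it in Wireshark
      Windump -i 3 -s 0 -w http.pcap ip host REMOTE and tcp port 80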

  • SVN client too old after I used sshfs

    - by johnlai2004
    I have an SVN checked-out web application on a shared hosting Linux server. The server has an svn client that I can access via ssh. From my localhost, I did the following:

      > sshfs [email protected]:webappdir/ /media/webapp
      > cd /media/webapp
      > svn update
      svn: Working copy '.' locked
      svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for details)
      > svn cleanup

    It's been 15 minutes and the svn cleanup still isn't finished; I think it may have frozen. So then I did the following:

      > ssh myusername@sharedhostingserver.com
      > cd webappdir
      > svn update
      svn: This client is too old to work with working copy '.'; please get a newer Subversion client

    So now I can't update my webappdir, because my /media/webapp is stuck on svn cleanup and the shared hosting server's svn client is out of date. I don't have privileges to install a new svn client on the shared hosting server. How do I get my svn update to work?

  • Postfix: outgoing mail over TLS for a specific domain

    - by vercetty92
    I am trying to configure Postfix to send mail over TLS (STARTTLS, in fact), but only to a specific destination. I tried "smtp_tls_policy_maps"; it is the only line in my main.cf regarding TLS configuration, but it does not seem to work. Here is my main.cf:

      queue_directory = /opt/csw/var/spool/postfix
      command_directory = /opt/csw/sbin
      daemon_directory = /opt/csw/libexec/postfix
      html_directory = /opt/csw/share/doc/postfix/html
      manpage_directory = /opt/csw/share/man
      sample_directory = /opt/csw/share/doc/postfix/samples
      readme_directory = /opt/csw/share/doc/postfix/README_FILES
      mail_spool_directory = /var/spool/mail
      sendmail_path = /opt/csw/sbin/sendmail
      newaliases_path = /opt/csw/bin/newaliases
      mailq_path = /opt/csw/bin/mailq
      mail_owner = postfix
      setgid_group = postdrop
      mydomain = ullink.net
      myorigin = $myhostname
      mydestination = $myhostname, localhost.$mydomain, localhost
      masquerade_domains = vercetty92.net
      alias_maps = dbm:/etc/opt/csw/postfix/aliases
      alias_database = dbm:/etc/opt/csw/postfix/aliases
      transport_maps = dbm:/etc/opt/csw/postfix/transport
      smtp_tls_policy_maps = dbm:/etc/opt/csw/postfix/tls_policy
      inet_interfaces = all
      unknown_local_recipient_reject_code = 550
      relayhost =
      smtpd_banner = $myhostname ESMTP $mail_name
      debug_peer_level = 2
      debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin xxgdb $daemon_directory/$process_name $process_id & sleep 5

    And here is my "tls_policy" file:

      gmail.com encrypt protocols=SSLv3:TLSv1 ciphers=high

    I also tried just:

      gmail.com encrypt

    My wish is to use TLS only for the gmail.com domain. With this configuration I don't see any TLS line in the headers of the delivered mail. But if I tell Postfix to use TLS where possible for all destinations with this line, it works:

      smtp_tls_security_level = may

    because I can then see this line in the headers of my mail:

      (version=TLSv1/SSLv3 cipher=OTHER);

    But I don't want to try to use TLS for the other domains, only for gmail.com. Am I missing something in my conf? (I also tried with "hash:/etc/opt/csw/postfix/tls_policy", and it's the same.) Thanks a lot in advance.
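    One thing that trips people up with lookup-table parameters like smtp_tls_policy_maps (a hedged suggestion, not a confirmed diagnosis for this setup): the text file is not read directly, it has to be compiled into the indexed map with postmap, using the same map type named in main.cf, and Postfix reloaded afterwards.

      # Rebuild the indexed policy map after editing the text file, then reload
      postmap dbm:/etc/opt/csw/postfix/tls_policy
      postfix reload

      # The SMTP client's log lines should then show whether the "encrypt" policy
      # is applied for gmail.com deliveries (log path depends on the syslog config)
      tail -f /var/log/syslog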

  • Apache, ErrorLog and logrotate: what is the best method?

    - by OlivierDofus
    Hi! Here's an example vhost from my sites:

      <VirtualHost *:80>
        DocumentRoot /datas/web/woog
        ServerName woog.com
        ServerAlias www.woog.com
        ErrorLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/error_log 86400"
        CustomLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/access_log 86400" combined
        DirectoryIndex index.php index.htm
        <Location />
          Allow from All
        </Location>
        <Directory /*>
          Options FollowSymLinks
          AllowOverride Limit AuthConfig
        </Directory>
      </VirtualHost>

    I've got 12 sites running now. This gives something like:

      [Shake]:/sources/software/mod_log_rotate# ps x | grep rotate
      /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400
      /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400
      [snip (as many error_log processes as virtual hosts)]
      /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400
      /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400
      [snip (as many access_log processes as virtual hosts)]
      grep rotate

    I've been looking everywhere but I've only found mod_log_rotate. The "little" problem is that the author (a very good C developer) explains: "Unfortunately Apache error logs are handled in such a way that we can't work the same log rotation magic on them. Like transfer logs they support piped logging though, so you can still use rotatelogs for them." So my question is: what would be the best way to handle multiple logs? If I just write classic log files and use the system's logrotate program, wouldn't that be a good deal? How would/do you deal with this? Thank you!
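    A sketch of the plain-logrotate alternative, assuming the ErrorLog/CustomLog directives are switched back to ordinary files under /logs/*/ and that apachectl lives next to rotatelogs in /httpd-2.2.8/bin (adjust paths to taste). The graceful restart in postrotate is what makes Apache reopen the freshly rotated files, and sharedscripts makes it run only once per rotation even though the pattern matches every vhost's logs.

      # /etc/logrotate.d/apache-vhosts: rotate all vhost logs daily, keep two weeks
      /logs/*/error_log /logs/*/access_log {
          daily
          rotate 14
          compress
          delaycompress
          missingok
          notifempty
          sharedscripts
          postrotate
              /httpd-2.2.8/bin/apachectl graceful > /dev/null 2>&1 || true
          endscript
      }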

  • Users getting 'flooded' with not read notifications (NRNs) for old emails and meeting requests

    - by Exile
    I'm being placed under quite a lot of pressure from senior management over a relatively trivial issue. Basically, the vast majority of users are complaining that they receive not read notifications (NRNs) for old emails and meeting requests, in large numbers, multiple times a day. I know something strange is happening because some are delivered at silly times in the morning (i.e. 3AM or 4AM). The problem I have is that some of these NRNs relate to meeting requests and messages that are 120 days old, and some users have deleted the original message, so I don't actually know whether a given NRN is for an email or a meeting request. This is typical of what users receive as an NRN:

      From: Sender
      Sent: 23 March 2012 04:16
      To: Recipient
      Subject: Not read: Accepted: Status update

      Your message
        To: Sender
        Subject: Accepted: Status update
        Sent: Wednesday, November 23, 2011 8:59:00 AM (UTC) Dublin, Edinburgh, Lisbon, London
      was deleted without being read on Friday, March 23, 2012 4:15:32 AM (UTC) Dublin, Edinburgh, Lisbon, London.

      ...

      From: Sender
      Sent: 18 March 2012 01:13
      To: Recipient
      Subject: Not read: Gold delivery - Sourcing module

      Your message
        To: Sender
        Subject: Gold delivery - Sourcing module
        Sent: Friday, November 18, 2011 9:37:58 AM (UTC) Dublin, Edinburgh, Lisbon, London
      was deleted without being read on Sunday, March 18, 2012 1:12:37 AM (UTC) Dublin, Edinburgh, Lisbon, London.

    I have done a search and found the following:

      http://support.microsoft.com/kb/2544246
      http://support.microsoft.com/kb/2471964

    But we already installed Update Rollup 6 for Exchange Server 2010 Service Pack 1 back in December, so I am not sure what we can do to fix this.

  • OpenVPN + iptables / NAT routing

    - by Mikeage
    Hi, I'm trying to set up an OpenVPN VPN which will carry some (but not all) traffic from the clients to the internet via the OpenVPN server. My OpenVPN server has a public IP on eth0, and is using tap0 to create a local network, 192.168.2.x. I have a client which connects from local IP 192.168.1.101 and gets VPN IP 192.168.2.3. On the server, I ran:

      iptables -A INPUT -i tap+ -j ACCEPT
      iptables -A FORWARD -i tap+ -j ACCEPT
      iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On the client, the default remains to route via 192.168.1.1. In order to point it to 192.168.2.1 for HTTP, I ran:

      ip rule add fwmark 0x50 table 200
      ip route add table 200 default via 192.168.2.1
      iptables -t mangle -A OUTPUT -j MARK -p tcp --dport 80 --set-mark 80

    Now, if I try accessing a website on the client (say, wget google.com), it just hangs there. On the server, I can see:

      $ sudo tcpdump -n -i tap0
      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on tap0, link-type EN10MB (Ethernet), capture size 96 bytes
      05:39:07.928358 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 558838 0,nop,wscale 5>
      05:39:10.751921 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 559588 0,nop,wscale 5>

    where 74.125.67.100 is the IP it gets for google.com. Why isn't the MASQUERADE working? More precisely, I see the source showing up as 192.168.1.101; shouldn't there be something to indicate that it came from the VPN? Edit: some routes from the client:

      $ ip route show table main
      192.168.2.0/24 dev tap0  proto kernel  scope link  src 192.168.2.4
      192.168.1.0/24 dev wlan0  proto kernel  scope link  src 192.168.1.101  metric 2
      169.254.0.0/16 dev wlan0  scope link  metric 1000
      default via 192.168.1.1 dev wlan0  proto static

      $ ip route show table 200
      default via 192.168.2.1 dev tap0
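    A hedged reading of that capture: the marked packets leave the client through tap0 but still carry the client's LAN source address (192.168.1.101), because policy routing changes the route, not the source address, so the server's MASQUERADE rule for 192.168.2.0/24 never matches and replies have nowhere sensible to go. Two possible fixes, as sketches not verified against this exact setup: SNAT/masquerade the marked traffic onto the VPN address on the client, or masquerade on the server regardless of the original source.

      # On the client: rewrite the source of traffic leaving via the VPN interface
      iptables -t nat -A POSTROUTING -o tap0 -j MASQUERADE

      # Or on the server: masquerade whatever leaves via eth0, regardless of its
      # original source address
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE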

  • IIS 7.0 rewrite URL problem

    - by Jouni Pekkola
    Hello, how can I set a redirect URL for a virtual directory in IIS 7.0? I have installed the latest URL Rewrite module 2. Let me explain my problem with an example. I have a website on my IIS 7.0 server: www.mysite.com. I decided to create a virtual directory, sales, under my site, pointing at the website's root directory. Now I need to create a redirect URL for that vdir; the vdir points at the same root directory as the site itself. The big idea is that I can type www.mysite.com/sales in the browser and be redirected automatically to www.mysite.com?productid=200. I tried to set up the redirect with a rewrite rule on the vdir (not the website), but I always get this error message: "cannot add duplicate collection entry of type 'rule' with unique key attribute 'name' set to 'test'". This happens when I point at the virtual directory and try to add the rule. I can add rules at the website level, but those rules don't work; the URL www.mysite.com/sales still gives me an error. I know the key is unique, I checked it in web.config. This kind of feature was really easy to use in IIS 6.0: just point at the vdir with your mouse and set Properties - A redirection to a URL. Please can someone explain the right way to do this in IIS 7.0?
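    For reference, a rule of this shape at the site level (a sketch using the URL Rewrite 2.0 web.config syntax; the rule name and target query string are just examples) is usually enough, without creating the virtual directory at all. The duplicate-key error typically means a rule with the same name already exists at a parent level, so the new rule needs a different name or the parent copy removed.

      <system.webServer>
        <rewrite>
          <rules>
            <!-- Redirect /sales to the product page; "SalesShortcut" is an example name -->
            <rule name="SalesShortcut" stopProcessing="true">
              <match url="^sales/?$" />
              <action type="Redirect" url="http://www.mysite.com/?productid=200" redirectType="Found" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>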

  • Not able to connect to a port other than 22 - OpenVPN

    - by t8h7gu
    I have an OpenVPN network with 5 clients: a computer running Arch Linux, which hosts the OpenVPN server and also hosts a CentOS virtual machine that is likewise connected to the OpenVPN subnet; a Windows 8 machine hosting a CentOS virtual machine, both of which are connected to the OpenVPN network; and finally a CentOS virtual machine hosted by an Ubuntu 14 computer (the Ubuntu host itself is not connected to the OpenVPN network). All the physical computers are on different networks. The problem is that when I use nmap to scan the Windows machine and its guest virtual machine, it says the host seems down. When I force nmap to scan a specific port, it shows a filtered state:

      nmap -Pn -p 50010 n3

      Starting Nmap 6.46 ( http://nmap.org ) at 2014-06-07 19:49 CEST
      Nmap scan report for n3 (10.8.0.3)
      Host is up (0.11s latency).
      rDNS record for 10.8.0.3: node3.com
      PORT      STATE    SERVICE
      50010/tcp filtered unknown

    Telnet also cannot connect to this port:

      telnet n3 50010
      Trying 10.8.0.3...
      telnet: Unable to connect to remote host: No route to host

    But ss on that host shows the port in the proper state:

      ss -anp | grep 50010
      LISTEN 0 50 10.8.0.3:50010 *:* users:(("java",12310,271))

    What might be the reason for this, and how do I fix it? EDIT: I've found that I am able to connect via telnet to the ssh port:

      telnet n3 22
      Trying 10.8.0.3...
      Connected to n3.
      Escape character is '^]'.
      SSH-2.0-OpenSSH_5.3

    So it seems it's not a problem with the Windows firewall, but I have no idea what it might be. Also, the nmap result for the first thousand ports:

      nmap -Pn -p 1-1000 n3

      Starting Nmap 6.46 ( http://nmap.org ) at 2014-06-07 20:08 CEST
      Nmap scan report for n3 (10.8.0.3)
      Host is up (0.49s latency).
      rDNS record for 10.8.0.3: node3.com
      Not shown: 999 filtered ports
      PORT   STATE SERVICE
      22/tcp open  ssh

      Nmap done: 1 IP address (1 host up) scanned in 77.87 seconds
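    Given that only port 22 answers, everything else shows as filtered, and telnet gets an ICMP-style "No route to host", the usual suspect is the firewall on the CentOS guest itself rather than Windows (a hedged guess, with example commands; 50010 stands in for whatever port the Java service uses).

      # On the CentOS guest (n3): list the current rules and check whether only ssh is allowed
      iptables -L -n -v

      # Temporarily allow the service port and retest from the other VPN nodes
      iptables -I INPUT -p tcp --dport 50010 -j ACCEPT
      service iptables save    # persist the rule on CentOS if the test succeeds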

  • Nokia E75 Mail for Exchange

    - by Sebastian
    Hi, I have an SBS 2003 server running Exchange Server 2003 SP2. My OWA has a GoDaddy certificate installed that is valid for 3 more years, and HTTPS works fine for OWA. The certificate has also been copied onto the Nokia E75. I am trying to synchronise the Nokia E75 via Mail for Exchange with my mailbox on the Exchange server. These are the steps I use:

      Menu > Email > New > Start > Select Internet Gateway
      Enter the details: myemail@mydomain.com
      Select company email: Mail for Exchange
      In the domain field enter: mydomain
      In the username/password fields enter: myusername/mypassword
      In the server field enter: mail.mydomain.com (the DNS resolves to the server's IP address)
      For secure access select: Internet / Secure / 443

    NOTE: port 443 has been opened on my SBOX and forwarded to the Exchange server. In IIS, under Default Web Site properties > Directory Security > Secure communications > Edit, "Require Secure Channel (SSL)" is enabled. However, when I try to sync the phone I get the following error: "Mail for Exch permissions illegal. Check permission configuration." The phone log gives the following information: "Username or Password Illegal. Correct Username and/or Password in the profile options." I've tried speaking with the phone service support, but they cannot identify the problem. Any help will be much appreciated.

  • Separate external and intranet portals using the same functions (.htaccess)

    - by jezzipin
    We are currently struggling to set up rules in a .htaccess file for a website built on our company's product. The product is built in PL/SQL, and procedures can be accessed via URLs. We use this functionality to present different options to our users. These options can be injected into HTML pages using replacement tags, so the tag [user_menu] is always replaced with:

      /wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for external sites, and:

      /intranet/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for internal sites. The issue we are having is twofold. First, we need to write our .htaccess rules so that users can reach the functionality whether they are internal or external, i.e. the links should work as follows:

      http://www.example.com/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    or:

      http://www.example.com/internal/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    Second, as you can see from the internal link above, the procedure needs to be prefixed with "internal" instead of "intranet". We cannot change this in our standard tags, as that would affect other sites, so we need to achieve it with .htaccess as well. Could anyone assist with this? I apologise if this is brief or confusing, but it's something I've never done before and have been given the task of doing. I also apologise for the lack of code in this post; I am a front-end developer with no prior experience of .htaccess, so please bear with me.
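    If the backend really serves the procedure under /intranet/ while the published links must say /internal/, a mod_rewrite rule along these lines might cover it (a sketch, assuming mod_rewrite is enabled and that the query string should pass through unchanged):

      RewriteEngine On
      # Map the public /internal/ prefix onto the backend /intranet/ path,
      # keeping the rest of the URL and its query string intact
      RewriteRule ^internal/(.*)$ /intranet/$1 [PT,L]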

  • SPF for two different outgoing servers?

    - by Marcus
    I have run into a problem that I think someone should have a really clever answer for. Today we have our own mail server, mail.domain.com, which we use to send mail to our customers (with a modified PHPMailer script), usually around 5000 mails every day. Everything from customer support to invoices goes through it, and the From header is set to "customer-service@domain.com". We are now thinking of migrating to Google Apps for internal use (70+ users). However, we cannot use Gmail's SMTP for sending "bulk" mail (there is a limit of 500 outgoing mails per day), so we really want to keep using our current system for the automated mail to customers and use Gmail's SMTP for internal use. So, how do we set up our SPF (Sender Policy Framework) records for this? We do not want to get stuck in any filters for "spoofing" the sender from either type of account (the ones sent from our own server and those sent through Gmail). In short: we want to be able to use the same e-mail address (for sending) on two different SMTP servers, and therefore two different IP addresses. Anyone with good knowledge of SPF who knows how to go about this, or whether it is even possible? Anything else I should think of when switching to Google Apps?
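    This is exactly what multiple mechanisms in a single SPF record are for: one TXT record on the domain can authorise both the existing server and Google's outbound ranges. A sketch (the ip4 address is a placeholder for mail.domain.com's real IP; _spf.google.com is Google's published include for mail sent via Google Apps/Gmail):

      ; TXT record on domain.com authorising both sending systems
      domain.com.  IN  TXT  "v=spf1 ip4:203.0.113.25 include:_spf.google.com ~all"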

  • Attempting to emulate Apache MultiViews with Nginx try_files

    - by Samuel Bierwagen
    I want a request to http://example.com/foobar to return http://example.com/foobar.jpg (or .gif, .html, .whatever). This is trivial to do with Apache MultiViews, and it seems like it would be equally easy in Nginx. This question seems to imply that it'd be as easy as try_files $uri $uri/ index.php; in the location block, but that doesn't work. try_files $uri $uri/ =404; doesn't work, nor does try_files $uri =404; or try_files $uri.* =404; Moving it between my location / { block and the regexp which matches images has no effect. Crucially, try_files $uri.jpg =404; does work, but only for .jpg files, and it throws a configuration error if I use more than one try_files rule in a location block! The current server { block:

      server {
        listen 80;
        server_name example.org www.example.org;
        access_log /var/log/nginx/vhosts.access.log;
        root /srv/www/vhosts/example;

        location / {
          root /srv/www/vhosts/example;
        }

        location ~* \.(?:ico|css|js|gif|jpe?g|es|png)$ {
          expires max;
          add_header Cache-Control public;
          try_files $uri =404;
        }
      }

    Nginx version is 1.1.14.
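    For what it's worth, nginx allows only one try_files directive per location, but that single directive can list several candidate files, which gets close to MultiViews for a known set of extensions. A sketch for the location / block (the extensions are examples):

      location / {
          root /srv/www/vhosts/example;
          # Try the literal URI first, then the same name with common extensions
          try_files $uri $uri.jpg $uri.gif $uri.html $uri/ =404;
      }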

  • Exchange 2010: update the timezone of all calendar items

    - by Andrew
    We are currently operating an Exchange 2010 server with Outlook 2010 clients on a ship, and today we changed timezones for the first time in quite a while. Is there any way to rebase all the calendars and/or update all the calendar items to the new timezone at the same time? I have already looked at the following tools:

      Microsoft Exchange Calendar Update Configuration Tool - http://www.microsoft.com/en-us/download/details.aspx?id=6266 (doesn't support Exchange 2010)
      Time Zone Data Update Tool for Microsoft Office Outlook - http://www.microsoft.com/en-us/download/details.aspx?id=17291

    The Time Zone Data Update Tool for Microsoft Office Outlook does work for individual users, but it has some serious downsides: each user needs to run it (approximately 400 users), and it only seems to work on the default account in Outlook 2010, while a lot of our users also have role accounts that we would need to run the tool on. The only way I can find to get the tool to run on a role account is to make it the default account in Outlook, and that in itself is quite an involved process, especially for users with 2 or 3 role accounts. So is there a way to change all the calendar items on our Exchange server to a different timezone in one go? We are a little unusual in that the whole organisation, meeting rooms and all, can change timezones overnight, but surely a product as advanced as Exchange 2010 allows us to do what we need.

  • Changing the current URL but serving content from another (same domain) - ProxyPass?

    - by zigojacko
    I've been banging my head against the wall with this for months now, so I hope someone here will finally be able to advise what is needed. I have some URLs like this:

      domain.com/category/subcat/filter/brand

    and I wish to rewrite them to:

      domain.com/category/brand-subcat

    The content loads fine at the first URL; I just want to show it at a different URL. Is "URL masking" the correct term for this? I have a RewriteRule in .htaccess that should do the job as far as I can tell:

      RewriteRule ^([a-zA-Z]+)/([a-zA-Z]+)/filter/([a-zA-Z]+)$ $1/$3-$2

    This isn't actually modifying the URL at all, though, on a Magento website (mod_rewrite is enabled and plenty of other rewrites in the same .htaccess are working). So firstly, I want to know whether what I am trying to achieve is definitely possible, and if so, what this process is even called. Secondly, does this need to be handled with ProxyPass and a [P] flag on the rewrite rule? I assume the Apache server doesn't currently have mod_proxy enabled, because when I add a [P] flag the URL returns a 403 Forbidden error with the full server path for the current URL. Please could anyone kindly advise what on earth I need to do to achieve this?
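    One hedged observation that may save some head-banging: to serve the existing content at the new address, the rule has to match the pretty URL and rewrite it internally back to the path Magento already answers on, rather than the other way round, and for a same-domain internal rewrite no [P]/mod_proxy is needed at all. A sketch, kept deliberately close to the pattern above (it assumes brand and subcat themselves contain no hyphens):

      RewriteEngine On
      # Visitor requests /category/brand-subcat; serve /category/subcat/filter/brand internally
      RewriteRule ^([a-zA-Z]+)/([a-zA-Z]+)-([a-zA-Z]+)$ $1/$3/filter/$2 [L]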

  • RAID-capable 3.5" SATA Drives

    - by nroam
    I recently purchased a pair of 1 TB Western Digital WD1002FBYS RE3 drives for use in an external RAID enclosure, and I have found that they tend to drop out of the array after a while. Thinking it was the enclosure, I tried them in another one but found the same issue. A bit of googling turned up http://www.tomshardware.com/forum/251076-32-raid-issues-western-digital-hard-disk which suggests: "WD's "RE" (RAID Edition) HDDs support Time-Limited Error Recovery ("TLER"): http://www.wdc.com/en/products/productcatalog.asp?language=en  As a non-TLER HDD fills up with data, the error detection firmware might take too long, and the RAID controller may drop that HDD from a RAID array." So now I wonder which SATA drives have firmware that is compatible with RAID arrays (especially RAID 1 and 5, not RAID 0)? I have not been able to come up with the magic set of keywords to elicit the answer from Google, although various sites suggest that Seagate and Hitachi are in general OK. Does anyone have any generic (or even specific) guidance on how to work out whether a drive's firmware may harbour code that is potentially an issue in a RAID setting, other than stating that it must be 'enterprise' ready?
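    One practical check that works on many (though not all) drives, regardless of the marketing label: the SCT Error Recovery Control timeout that TLER refers to can often be read, and sometimes set, with smartctl, which shows whether a drive will give up on a bad sector quickly enough for a RAID controller. A sketch (the device name is an example; values are in tenths of a second):

      # Read the current error-recovery timeouts (read, write) if the drive supports SCT ERC
      smartctl -l scterc /dev/sda

      # Ask the drive to cap recovery at 7 seconds for reads and writes; not persistent
      # across power cycles on all models, and unsupported on many desktop drives
      smartctl -l scterc,70,70 /dev/sda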

  • Industrial-strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system to be used by multiple people in a startup. Our requirements:

      Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
      Access control: we need to control who can access which files, at least on a very coarse basis, e.g. the developers can access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. Dropbox provides this via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user; so the cloud service should have a notion of an account (our startup) with multiple users, each with distinct credentials and rights.
      Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g. Ubuntu) too.
      Security: it must provide robust security.
      Backup: the cloud service must reliably back up the files.
      Versioning: change version history is a big plus, but not required.
      Not free: we're willing to pay for the service.

    So far, we've reviewed the following, albeit not completely thoroughly:

      Dropbox: has everything except (1) access control, which is provided via shared folders but is not adequate, if I understand it correctly, because there's no authentication of the sharing user, and (2) security, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821.
      Windows Live Mesh: has everything except (1) clients, supporting only Windows 7 and OS X.
      SpiderOak: has everything except (1) transparent file system access, which is only available for 1 user.
      Amazon Cloud: doesn't offer (1) transparent file system access.
      Rackspace Cloud Drive: has everything except (1) access control and (2) versioning.

    I'll gladly include any clarifications or additional systems the community provides. Arthur

  • Can't access site internally, but DNS works

    - by BloodyIron
    1) I have apache2 running a vhost for a website.
    2) This apache2 instance is already successfully set up so that other websites on it are accessible both internally and externally.
    3) I am using an internal bind9 server to resolve the new website's domain internally to its private IP. This bind9 server is not public facing, nor is it the master server on the internet.
    4) The DNS resolves internally to the right IP.
    5) Firefox reports "Server not found".
    6) I have copied the config almost identically from other configs that are known to work (adjusting for proper paths, of course), and I have reloaded and restarted apache2 repeatedly.
    7) I have an entry in the vhost config for this domain to forward the .org, .info and .net TLDs to .com, and my browser does go from .org to .com despite point 5.
    8) /var/log/apache2/access.log shows when someone tries to access the site externally, but no activity is observed when someone tries to access it internally. Changing the log level does not improve the situation.
    9) I am out of ideas; nothing appears to be wrong. Please help!

    To be explicit: why is this new site unreachable internally? I would like to clarify something, even though I have already outlined it. YES, I know this system is on a private network. NO, it is not going through a router. YES, I am using an internal DNS server (bind9) to resolve, and YES, it does resolve to the proper internal IP. YES, other websites on the same server, set up in the same way with internal resolution, work right now and have done for a while. Everything for this domain is set up the same as the other, working, domains as far as I can tell. The other working domains are internally AND externally accessible; this domain is currently only externally accessible. When I go to it internally, Firefox tells me "Server not found".
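    Two quick checks that separate name resolution from the web server itself, since Firefox's "Server not found" is a name-resolution error rather than an HTTP one (a diagnostic sketch; the domain, DNS server IP and private web server IP below are placeholders):

      # Does the client actually use the internal bind9 server, and what does it answer?
      dig newsite.example.com               # uses /etc/resolv.conf, like the browser's resolver
      dig @192.0.2.53 newsite.example.com   # ask the internal bind9 server directly

      # Skip DNS entirely: talk to the vhost by IP with the right Host header
      curl -v -H 'Host: newsite.example.com' http://10.0.0.5/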

  • IP to IP forwarding with iptables [CentOS]

    - by FunkyChicken
    I have 2 servers: server 1 with IP 1.1.1.1 and server 2 with IP 2.2.2.2. My domain example.com points to 1.1.1.1 at the moment, but very soon I'm going to switch it to 2.2.2.2. I have already set a low TTL for example.com, but some people will still hit the old IP for a while after I change the domain's address. Both machines run CentOS 5.8 with iptables, and nginx as the web server. I want to forward all traffic that still hits server 1.1.1.1 to 2.2.2.2 so there won't be any downtime. I found this tutorial: http://www.debuntu.org/how-to-redirecting-network-traffic-a-new-ip-using-iptables but I cannot seem to get it working. I have enabled IP forwarding:

      echo "1" > /proc/sys/net/ipv4/ip_forward

    After that I ran these 2 commands:

      /sbin/iptables -t nat -A PREROUTING -s 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
      /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE

    But when I load http://1.1.1.1 in my browser, I still get the pages hosted on 1.1.1.1 and not the content from 2.2.2.2. What am I doing wrong?
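    A hedged observation about those rules: -s 1.1.1.1 matches packets whose source is 1.1.1.1, but visitors' packets arrive with 1.1.1.1 as the destination, so the PREROUTING rule never matches (and traffic generated on the box itself does not traverse PREROUTING at all, so testing from another machine matters). A sketch with the match turned around:

      # Rewrite web traffic addressed to the old server so it is delivered to the new one
      /sbin/iptables -t nat -A PREROUTING -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80

      # Make the forwarded connections originate from the old server so replies come back this way
      /sbin/iptables -t nat -A POSTROUTING -d 2.2.2.2 -p tcp --dport 80 -j MASQUERADE

      # Forwarding must also be permitted if the FORWARD chain's default policy is not ACCEPT
      /sbin/iptables -A FORWARD -d 2.2.2.2 -p tcp --dport 80 -j ACCEPT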

  • 'The RPC server is unavailable' or 'Access is denied' error when using Remote Desktop Services Manager on Windows 7 (but mstsc.exe works!)

    - by tbone
    I am trying to connect to a Windows XP workstation from a Windows 7 Ultimate workstation using Remote Desktop Services Manager. I am able to open a Remote Desktop (mstsc.exe) session from the Win7 machine to the WinXP machine with no problem at all, and when running the Remote Desktops Admin (tsmmc.msc) tool on a Windows XP box I can also connect with no problem. However, when I use the new Remote Desktop Services Manager on Windows 7 and try to connect, I get the error "The RPC server is unavailable". What could cause this? Has there been some fundamental change in Remote Desktop Services Manager; does it connect in a different way somehow?

    Update #1: I turned off the firewall on the Windows XP box and the "The RPC server is unavailable" error went away, so RDSM seems to be using an entirely different port/connection/service compared to mstsc.exe or the old Remote Desktops Admin tool. Now, after disabling the firewall, I get a new error: "Access is denied". After doing some googling, I found some articles discussing this; basically, the error is very misleading. The actual problem is that if either side of the connection has dual monitors and they are not both Windows 7 Ultimate, then you cannot connect using Remote Desktop Services Manager. The reason is that by default it uses the /multimon switch, and that switch requires a certain level of Windows license; there seems to be no way of changing this default (if anyone knows of a way to change it, please post an answer or comment!). Nice going Microsoft.

      http://social.technet.microsoft.com/Forums/en-US/windowsserver2008r2rds/thread/4d06278f-e0f4-4f8e-a8e1-3697ee967ef4
      http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Windows/Windows_7/Q_26225743.html

  • CentOS 6.2 Postfix install dependency issues

    - by Mishari
    I am administering a VPS running cPanel, and I'm trying to install Postfix. /etc/redhat-release says the version is CentOS release 6.2 (Final), and uname -a says:

      Linux server.mydomain.com 2.6.32-220.el6.i686 #1 SMP Tue Dec 6 16:15:40 GMT 2011 i686 i686 i386 GNU/Linux

    This is how I'm installing Postfix (I had already tried to solve the problem earlier by installing EPEL):

      # yum install postfix
      Loaded plugins: fastestmirror, security
      Loading mirror speeds from cached hostfile
       * epel: mirror.cogentco.com
      Setting up Install Process
      Resolving Dependencies
      --> Running transaction check
      ---> Package postfix.i686 2:2.6.6-2.2.el6_1 will be installed
      --> Processing Dependency: mysql-libs for package: 2:postfix-2.6.6-2.2.el6_1.i686
      --> Finished Dependency Resolution
      Error: Package: 2:postfix-2.6.6-2.2.el6_1.i686 (centos-burstnet)
             Requires: mysql-libs
       You could try using --skip-broken to work around the problem

    Attempts to install mysql-libs tell me that several of its files conflict with "MySQL-server-5.1.61-0.glibc23.i386". I'm not sure why or how this is happening; does anyone know how to resolve it? Surely CentOS 6.2 could not have shipped with a broken Postfix.
