Search Results


  • Can't access DFS namespace over VPN

    - by cpf
    Hi Serverfault, I've recently configured 2 servers in AD on the same domain level. They are physically separated and permanently connected through a site-to-site vpn for dfs replication. All well, but when users connect to either site through vpn (from home e.g.) they can't use the domain level method: \\domain.com\data Internally this works perfectly, resolving domain.com when connected through vpn gets the correct IP. I've tried Google to figure things out. What I was able to find was that more people have this issue, no real solution found though. Can anyone explain why this is happening? Especially a solution would be really helpful! Thanks in advance.

  • Tool/Program/Script/Formula for deciphering Active Directory Connection Strings for 3rd party user i

    - by I.T. Support
    We're using WSFTP, which has an Active Directory Integration module. To populate the user accounts you need to provide a connection string akin to: OU=Users,DC=domain,DC=com CN=Domain Users,OU=Users,DC=domain,DC=com Questions: Is there a Tool/Program/Script/Formula that allows me to decipher how these strings might look based on what I can see in Active Directory Users & Computers? Is there a proper/accepted name for these types of connection strings? I don't even know what to Google to get more information about how to format one properly How would I troubleshoot the connection string if I think it looks correctly formatted, but it isn't working? Thanks!

  • Block site on a PC logged into a domain and using a proxy

    - by Rauf
    I read lot of posts related with blocking sites. Most of the posts says to edit hosts file. I know it is a good method. But this one is not working for me. Can you guess what is the issue by analyzing the following details, My PC is joined to a domain and using proxy settings, and the logged in user having administrator privileges. After reading some answers, I did the following Changed the hosts file to have # 38.25.63.10 x.acme.com # x client host 127.0.0.1 localhost 127.0.0.1 www.facebook.com Added no proxy for facebook, Still, it is not working. Why ?

  • Mailman bounces go to me, do not unsubscribe user! (MacOS Server)

    - by vy32
    I run a large mailing list (30,000 users). Every month I get a whole slew of bounces in my mailbox from the "mailing list memberships reminder" that get sent out. I thought that these were supposed to go to the program and automatically unsubscribe people, not go to me. We are running a standard unmodified MacOS Server installation. Here is a typical bounce: From: Mail Delivery System Subject: Undelivered Mail Returned to Sender To: [email protected].COM This is the mail system at host mail.COMPANY.COM. I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to postmaster. If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system ... Any idea what to do?

  • Setting up an externally facing server on Windows. How do I set up DNS/nameservers?

    - by Jason Miesionczek
    So i have a domain name that i would like to host from my static ip internet connection. I have windows server 2008 r2 installed, and dns setup. The dns server is currently behind a firewall, and i have the appropriate rules to allow traffic to reach it. My question is, what entries do i need to create in the DNS so that i can have some nameservers to use at my domain registrar, so that the domain correctly points to the server? I know that most domains have nameservers like ns1.domain.com, ns2.domain.com, etc. What would i point those to in my DNS?

  • How to use wget to grab a copy of Google Code site documents?

    - by Alex Reynolds
    I have a Google Code project which has a lot of wiki'ed documentation. I would like to create a copy of this documentation for offline browsing. I would like to use wget or a similar utility. I have tried the following: $ wget --no-parent \ --recursive \ --page-requisites \ --html-extension \ --base="http://code.google.com/p/myProject/" \ "http://code.google.com/p/myProject/" The problem is that links from within the mirrored copy have links like: file:///p/myProject/documentName This renaming of links in this way causes 404 (not found) errors, since the links point to nowhere valid on the filesystem. What options should I use instead with wget, so that I can make a local copy of the site's documentation and other pages?

  • How to make nginx only respond to one domain?

    - by larryzhao
    I am pretty new to nginx, I host my rails application on nginx+passenger. I want my website to be accessible to only one domain. So I set my nginx conf like the following: server { listen 80; server_name mydomain.com www.mydomain.com; root /var/deploy/myapp/current/public; passenger_enabled on; location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires 1y; add_header Cache-Control public; } } I specify the server_name directive, but still, it answers anything which points to this IP and I could see that in the access.log that it answers to other domain names. Is there anything I am doing wrong?

  • Recovering an old website

    - by noah
    I have a client with an old website that somebody setup for him long ago. The guy who set it up is unreachable, so how do we go about trying to take it over? A WHOIS lookup got us some contact information, but I don't have great hopes for that (it hasn't been update in quite some time). The nameservers are ns1.theplanet.com and ns2.theplanet.com, and we will try calling them, but I don't expect we'll be able to get much from them. What are our options? Is there a way I can discover the registrar so we can try contacting them as well? EDIT: It would be sufficient if we could get control of the domain name or put in some sort of redirect to the new site. Either hosting was prepaid for quite some time, or someone else is still paying for it, so we don't care about that.

  • Redirect to new domain and preserve username

    - by David Brown
    I recently switched to a new domain for a version control server I run. The server is usually accessed with a username included in the url such as https://[email protected]/some/stuff. I want to redirect requests to the old domain to the new domain and preserve everything else in the url (including the username). So the former url would be redirected to https://[email protected]/some/stuff. Currently I have the following rewrite condition and rule: RewriteCond %{HTTP_HOST} sub\.olddomain\.com RewriteRule (.*) https://sub.newdomain.net$1 [R=301,L] This works except it drops the userinfo part of the URL. Is there a way I can preserve the user info?

  • Cron Jobs unable to deliver email error report

    - by root
    I am sure I have right syntax and I am still unable to receive report emails to my email address. My OS is CentOS 6.4. My Crontab script is MAILTO="cron@mydomain.com" * * * * * /usr/bin/php5 /home/myusername/public_html/cron.php /post/find_submit_test/1/ Email address cron@mydomain.com is working fine and I tested sendmail from ssh which is too working fine but cron reports are unable to be delivered. I checked WHM for notification settings couldn't even find anything relevant there. Please advice me how to fix this. Thanks

  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help with analyzing a log from Apache Bench: Benchmarking texteli.com (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Completed 600 requests Completed 700 requests Completed 800 requests Completed 900 requests Completed 1000 requests Finished 1000 requests Server Software: Server Hostname: texteli.com Server Port: 80 Document Path: /4f84b59c557eb79321000dfa Document Length: 13400 bytes Concurrency Level: 200 Time taken for tests: 37.030 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 13524000 bytes HTML transferred: 13400000 bytes Requests per second: 27.01 [#/sec] (mean) Time per request: 7406.024 [ms] (mean) Time per request: 37.030 [ms] (mean, across all concurrent requests) Transfer rate: 356.66 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 27 37 19.5 34 319 Processing: 80 6273 1673.7 6907 8987 Waiting: 47 3436 2085.2 3345 8856 Total: 115 6310 1675.8 6940 9022 Percentage of the requests served within a certain time (ms) 50% 6940 66% 6968 75% 6988 80% 7007 90% 7025 95% 7078 98% 8410 99% 8876 100% 9022 (longest request) What this results can tell me? Isn't 27 rps too slow?

  • nginx connection reset

    - by Steve
    When first visiting my site after not visiting it for a few minutes, the connection is "reset" 100% of the time. I get this message when debug is turned on, along with a 400 bad request status message: client prematurely closed connection while reading client request line I've read that this could be caused by large_client_header_buffers setting. I have google analytics on my site. Using live http headers, I get this as the request: `GET /__utm.gif?utmwv=5.3.7&utms=35&utmn=745612186&utmhn=domain.com&utmcs=UTF-8&utmsr=1920x1080&utmvp=1841x903&utmsc=24-bit&utmul=en-us&utmje=1&utmfl=11.4%20r402&utmdt=2006Scape%20Forums%20-%20General&utmhid=2004697163&utmr=0&utmp=%2Fservices%2Fforums%2Fboard.ws%3F3%2C4&utmac=UA-25674897-2&utmcc=__utma%3D68455186.1647889527.1351640625.1352446442.1352451659.100%3B%2B__utmz%3D68455186.1352097329.64.2.utmcsr%3Ddomain.com%7Cutmccn%3D(referral)%7Cutmcmd%3Dreferral%7Cutmcct%3D%2Fservices%2Fforums%2Fboard.ws%3B&utmu=q~ HTTP/1.1 my large_client_header_buffers in nginx is set to 4 8k, so I don't know if this is the problem. Immediate requests have the first "reset" request are all successful.

  • Cannot connect to apache web server over internet

    - by user1658093
    I can access my apache 2.2 webserver from the lan ( at this case I use local IP aadress ) but I cannot connect externally ( from another network ). I changed apache to listen port 800, forwarded same port from router control panel, turned off windows and router firewalls. I use whatsmyip.com to get IP with what I try to connect. When I'm trying to connect I use : [whatsmyip.com IP]:800. Also, I can ping server IP externally. OS is windows7. Any ideas, suggestions? Thanks

  • Apache2, can't apply Directory access

    - by skomak
    Hi, i can't figure out how apply deny access to a directory. Here is my config: <VirtualHost x.x.x.x:80> DocumentRoot /var/www/html/wwwhtml ServerName mydomain.com ServerAdmin it@mydomain.com ErrorLog /var/log/httpd/mydomain_error.log TransferLog /var/log/httpd/mydomain_access_log Alias /test /var/www/html/wwwhtml/eventum <Directory /var/www/html/wwwhtml/eventum> Order deny,allow Deny from all #Allow from 192.168.0 </Directory> I deny access to /test but it doesn't work, on my another server it works perfectly :/ Do you know what can cause that problem? How to solve it? It is not whole config but the most important part. Maybe file rewrites can cause it? Thanks in advance.

  • TCP > COM1 for receiving messages and displaying on POS display pole

    - by JakeTheSnake
    I currently have a Java Applet running on my web page that communicates to a display pole via COM1. However since the Java update I can no longer run self-signed Java Applets and I figure it would just be easier to send an AJAX request back to the server and have the server send a response to a TCP port on the computer...the computer would need a TCP COM virtual adapter. How do I install a virtual adapter to go from a TCP port to COM1? I've looked into com0com and that is just confusing as hell to me, and I don't see how to connect any ports to COM1. I've tried tcp2com but it doesn't seem to install the service in Windows 7 x64. I've tried com2tcp and the interface seems like it WOULD work (I haven't tested), but I don't want an app running on the desktop...it needs to be a service that runs in the background. So to summarize how it would work: Web page on comp1 sends AJAX request to server Server sends text response to comp1 on port 999 comp1 has virtual COM port listening on port 999, sends data to COM1 pole displays data

  • Lustre - is this bad form?

    - by ethrbunny
    Im going to be consolidating several 'server rooms' into a single installation soon. Part of this effort will be finding a home for 5Tb (and growing) of files / logs. To this end Im looking at Lustre and appreciating its ability to scale. The big vendors want to sell me a $20K SAN to manage this but Im wondering about buying several iSCSI units (like this http://www.asacomputers.com/3U-iSCSI-Solution.html) and using VMs for the OSS machines. This would let me fail-over to cover problems and not require a dedicated system for each OSS. Given articles like this (http://h30565.www3.hp.com/t5/Feature-Articles/RAID-Is-Dead-Long-Live-RAID/ba-p/1422) that talk about how RAID is not keeping up with drive density Im leaning towards more disks with lower capacity each. Again - some akin to the iSCSI array above. Tell me why this is a terrible idea. Do I really need to invest in a PE710 for each OSS/OST?

  • Wordpress 3 multi-site install

    - by mike
    Hello, Trying to figure out if this is possible... My company has a cms product that was written in Java and we decided to use Wordpress to run blogs for our clients. Obviously, Wordpress does not run on tomcat(at least not by default) so we installed Pound(http://www.apsis.ch/pound/) on our server and have setup any Apache and Tomcat on different ports. When "/blog/" is requested, the request is directed to Apache. This works fine but we would like to use Wordpress multi site so that we can manage all the blogs from a single interface. We would also like the url for every site to be "/blog/" example: http://www.site1.com/blog/ http://www.site2.com/blog/ I'm thinking it would have to be done with apache??? Is it even possible? Thanks!

  • One user sometimes gets an unknown certificate error opening Outlook

    - by Chris
    Let me clarify a little. This isn't an unknown certificate error it's an unknown certificate error in so much as I can't figure out where the certificate comes from. This happens on a Win 7 Enterprise machine connecting to Exchange 2010 with Outlook 2010. The error he gets is that the root is not trusted because it's a self-signed cert. Take a look at this screenshot because even if I had generated this myself I wouldn't have put "SomeOrganizationalUnit" or "SomeCity" or "SomeState", etc. (Red block covers our domain name.) I'm a little concerned this is a symptom of a security breach. Exchange 2010 has three certificates installed but none of them are this certificate. They all have different expiration dates (one is expired) and different meta-data. edit: There are two scenarios that I see the certificate warning and one of them I can reliably repeat. When the user leaves his computer on over night Outlook pops the Security Warning window. I don't know what time this happens. Using Outlook Anywhere if I connect to Exchange externally via a cellular USB modem the Security Warning window will appear every time I close and reopen Outlook. Whether I say Yes or No does not make a difference on whether or not I can connect to Exchange and send/receive email. In other words, I can always connect to Exchange. I've checked my two Exchange servers and my Cisco router for a certificate that matches this one and I can't find it. edit 2: Here is a screenshot of the Security Alert window. (I've been calling it Security Warning... My mistake.) edit 3: I stopped seeing this error several weeks ago but I can't tie it to any single event (because I just sort of realized that warning had stopped showing up) but I think I found the source of the certificate. Last week I found out that the certificate on our website DomainA.com was invalid. I knew that our web admin had installed a valid certificate so when I look into the problem I found out I was being presented with the invalid certificate that this posting is in regards to. The Exchange server's domain is mail.DomainA.com so I can only guess that Outlook was passing this invalid certificate through as it did some kind of check on DomainA.com. This issue is still a mystery because the certificate warning stopped appearing several weeks ago whereas the invalid certificate issue on the website was only fixed last week. It ended up being a problem with the website control panel. The valid certificate was installed but not being served for some reason and instead the self-signed cert was being served.

  • Asterisk doesn't start properly at system startup. DNS lookup fails.

    - by leiflundgren
    When I start my Ubuntu system it attempts two DNS lookups. One to find out what my internet-routers external ip is. And one to find the IP of my PSTN-SIP-provider. Both fails. [Apr 7 22:14:54] WARNING[1675] chan_sip.c: Invalid address for externhost keyword: sip.mydomain.com ... [Apr 7 22:14:54] WARNING[1675] acl.c: Unable to lookup 'sip.myprovider.com' And since the DNS fails it cannot register properly a cannot make outgoing or incoming calls. If I later, after bootup, restart asterisk everything works excelent. Any idea how I should setup things so that either: Delay Asterisk startup so that DNS is up and healthy first. Somehow get Asterisk to re-try the DNS thing later. Regards Leif

  • Serve mirrored (static) web-page with original headers

    - by aioobe
    I have a dynamic webpage which I want to create a "frozen" copy of. Typically I would do something like wget -m http://example.com, and then put the files in the document root of the web-server. This site however has some dynamic content, including dynamically generated images, for instance http://example.com/company/123/logo This means that in order to mirror the page, I need to Save whatever headers the server currently serves for each URL. This can be done using the wget option --save-headers. Serve the static pages and serve the proper headers for each file. (This I have no idea of how to do.) What is the best way to solve this? Any suggestions are welcome.

  • ProFTPd: Multiple Domain VirtualHosts on one IP address

    - by Badger
    I have a webserver that we are giving a consultant FTP access to. For one domain hosted on that server he needs access to a "dev" directory and for a different domain hosted on that server he needs access to a different directory. I am trying to set this up with VirtualHosts, but I am having issues. Here is the VirtualHost bit of my proftpd.conf file: <VirtualHost www.example2.com> ServerName "Example 2" DefaultRoot /var/www/example2/dev </VirtualHost> <VirtualHost www.example1.com> ServerName "Example 1" DefaultServer on DefaultRoot /var/www/example1 </VirtualHost> When I FTP to either domain I always get the first VirtualHost, even if I FTP to the second domain.

  • How to suppress "Not collecting exported resources without storeconfigs"?

    - by Andy Shinn
    I'm getting the following in my Puppet master syslog over and over: Sep 27 11:52:05 puppet1 puppet-master: Not collecting exported resources without storeconfigs Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs I'm not actually using storeconfigs: [ashinn@puppet1 ~]$ cat /etc/puppet/puppet.conf [agent] server = puppet.mydomain.com environment = production report = true [main] logdir = /var/log/puppet vardir = /var/lib/puppet ssldir = /var/lib/puppet/ssl rundir = /var/run/puppet factpath = $vardir/lib/facter pluginsync = true certname = puppet1.mydomain.com [master] modulepath = $confdir/environments/$environment/modules manifest = $confdir/environments/$environment/manifests/site.pp templatedir = $confdir/templates autosign = $confdir/autosign.conf ssl_client_header = SSL_CLIENT_S_DN ssl_client_verify_header = SSL_CLIENT_VERIFY report = true reports = hipchat Any way I can suppress these messages? What do they actually come from?

  • ASA Slow IPSec Performance

    - by Brent
    I have a IPSec link between two sites over ASA 5520s running 8.4(3) and I am getting extremly poor performance when traffic passes over the VPN. CPU on the device is 13%, Memory at 408 MB, and active VPN sessions 2 so the load on the device is particularly low. Screenshot of wireshark file transfer between the two hosts over the VPN: The large amount of Header checksum failures is alarming, but I am not sure what to check now. I perf is showing around 4-5 Mbit/sec with differing TCP window sizes. Show Run on the ASA http://pastebin.com/uKM4Jh76 Show cry accelerator stats http://pastebin.com/xQahnqK3

  • App / protocol to tune into live audio and video based on schedule or subscription

    - by Richard
    Many of us have embraced the podcasting revolution enabled by rss feeds and podcatchers. Alot of sites now broadcast live streams of what is eventually edited into a podcast. In most cases listening to the live stream gets you the info several days sooner then the podcast. So I was wondering if anybody knows of a notification protocol / app that allows me to auto tune into certain streams when they go live, or based on a schedule. I imagine twitter could be used for the notification but It'd be better not to be tied to a proprietary service. Example podcasts / live streams noagenda.squarespace.com jupiterbroadcasting.com twit.tv

  • Transport rule - Exchange 2010

    - by Jeff
    I have two transport rules on my exchange server. One is: > Apply rule to messages: From users that are 'outside the organization' > and when any of the recipients in the To or Cc fields is a member of > 'nointernetmail@company.com' Forward the messageto sender's manager > for moderation The second is: Apply rule to messages from a member of 'nointernetmail@company.com' and sent to users that are 'outside the organization' forward the message to the sender's manager for moderation. nointernetmail is a distribution group, and each user has the managed by set to there local manager. However these transport rules do not work, internet mail is still sent and received without issue. I have read various tutorials / articles of how to do this on sites such as msexchangeblog and even microsoft technet, however even after following the guides I am still unable to have this function properly. Any help is appreciated.
