Search Results


  • Apache2, can't apply Directory access

    - by skomak
    Hi, I can't figure out how to apply deny access to a directory. Here is my config:

        <VirtualHost x.x.x.x:80>
            DocumentRoot /var/www/html/wwwhtml
            ServerName mydomain.com
            ServerAdmin it@mydomain.com
            ErrorLog /var/log/httpd/mydomain_error.log
            TransferLog /var/log/httpd/mydomain_access_log
            Alias /test /var/www/html/wwwhtml/eventum
            <Directory /var/www/html/wwwhtml/eventum>
                Order deny,allow
                Deny from all
                #Allow from 192.168.0
            </Directory>

    I deny access to /test but it doesn't work; on another server of mine it works perfectly :/ Do you know what can cause that problem? How do I solve it? This is not the whole config, just the most important part. Maybe file rewrites can cause it? Thanks in advance.
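
    One hedged check, not from the question: another <Directory> block or a mod_rewrite rule may be mapping the request before this block applies. As a sketch, denying by URL path also covers the /test alias directly (Apache 2.2 syntax, matching the config above):

        <Location /test>
            Order deny,allow
            Deny from all
        </Location>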

    Read the article

  • Tool/Program/Script/Formula for deciphering Active Directory Connection Strings for 3rd party user i

    - by I.T. Support
    We're using WSFTP, which has an Active Directory Integration module. To populate the user accounts you need to provide a connection string akin to:

        OU=Users,DC=domain,DC=com
        CN=Domain Users,OU=Users,DC=domain,DC=com

    Questions:

        - Is there a Tool/Program/Script/Formula that allows me to decipher how these strings might look based on what I can see in Active Directory Users & Computers?
        - Is there a proper/accepted name for these types of connection strings? I don't even know what to Google to get more information about how to format one properly.
        - How would I troubleshoot the connection string if I think it looks correctly formatted, but it isn't working?

    Thanks!
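
    These strings are LDAP distinguished names (DNs): each OU= segment is a container in the Active Directory Users & Computers tree read from the object outward, and the DC= segments spell out the domain. A hedged way to read the exact DN of an object you can already see in the console (assuming the built-in dsquery tool is available on a domain-joined machine):

        dsquery user -name "Some User"
        dsquery ou -name Users

    Both print the full distinguished name, which can then be pasted into the WSFTP module.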

    Read the article

  • Transferring NS records to a new server

    - by lanemiller
    I feel like that was NOT worded well, but here is my current predicament. I recently had a GoDaddy dedicated server, and decided after their customer support failed to do anything but disappoint, to switch to Rackspace. We have 2 ns records that point to our godaddy server, and we have a few sites left on the server, that rely on it for their DNS zones, and the owners of the domains fail to respond to us. So, the question is, if I need to transfer the sites off of the OLD godaddy NS, can I point the A records from my ns1.domain.com and ns2.domain.com to match up with IP addresses of the Rackspace nameservers? OR, do I cname my NS records to match the rackspace ones? I DO know that this isn't advised, either method, but I need to get these sites moved before Godaddy tries charging another $2k for the server.
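
    Assuming the registrar entries for ns1/ns2 are glue (host) records, a minimal sketch of what repointing them looks like in a BIND-style zone file; the IPs are placeholders for the Rackspace nameserver addresses, and the matching glue records at the registrar have to be updated as well. An NS target should not resolve through a CNAME, so A records are the safer route:

        ns1    IN    A    203.0.113.10
        ns2    IN    A    203.0.113.11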

    Read the article

  • 500 Internal Server Error after moving Joomla installation to new environment

    - by rad
    (This is the first time I moved the website so please don't be hard on me.) After moving the website, the homepage shows up properly but other pages do not. I get a 500 Internal Server Error on all other pages. Before moving, the Search Engine Friendly URLs and Use URL rewriting options were enabled in the Joomla Dashboard. Is this the reason the other pages are not showing up? If so, how do I fix this? I think the homepage shows up because the URL myWebsite.com redirects to myWebsite.com/index.php automatically. Note that I have transferred all of the Joomla files through Filezilla, imported the MySQL database properly, and also edited configuration.php to set the proper settings for the database.
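
    With both SEF URLs and URL rewriting enabled, Joomla relies on the web server's rewrite rules, so a hedged first check on the new host (assuming Apache with mod_rewrite) is that Joomla's shipped htaccess.txt was copied or renamed to .htaccess in the site root and that its base path matches the new location:

        # .htaccess in the Joomla root (sketch)
        RewriteEngine On
        RewriteBase /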

    Read the article

  • xen hotplug can't add vif to eth0: operation not supported

    - by John Tate
    I am new to xen and I am trying to get a domU running, but I am having problems with it. I think my network card might not support bridging, which is bizarre. This is the error I get when trying to create the domU:

        [root@hyrba ~]# xm create sardis.secusrvr.com.cfg
        Using config file "/etc/xen/sardis.secusrvr.com.cfg".
        Error: Device 0 (vif) could not be connected. Hotplug scripts not working.

    All the xen kernel modules are loaded...

        xen_pciback            52948  0
        xen_gntalloc            6807  0
        xen_acpi_processor      5390  1
        xen_netback            27155  0 [permanent]
        xen_blkback            21827  0 [permanent]
        xen_gntdev             10849  1
        xen_evtchn              5215  1
        xenfs                   3326  1
        xen_privcmd             4854  16 xenfs

    I get this error in /var/log/xen/xen-hotplug.log:

        RTNETLINK answers: Operation not supported
        can't add vif2.0 to bridge eth0: Operation not supported
        can't add vif2.0-emu to bridge eth0: Operation not supported
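
    The hotplug error usually means eth0 is a plain interface rather than a bridge, so the vif cannot be attached to it; hardware bridging support on the NIC is not the issue. A hedged check and the corresponding guest setting (the bridge name xenbr0 is an assumption, use whatever bridge actually exists):

        brctl show                  # lists existing bridges; eth0 only appears here if it is one

        # in /etc/xen/sardis.secusrvr.com.cfg, point the vif at a real bridge:
        vif = [ 'bridge=xenbr0' ]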

    Read the article

  • Setting up an externally facing server on Windows. How do I set up DNS/nameservers?

    - by Jason Miesionczek
    So I have a domain name that I would like to host from my static IP internet connection. I have Windows Server 2008 R2 installed, and DNS set up. The DNS server is currently behind a firewall, and I have the appropriate rules to allow traffic to reach it. My question is, what entries do I need to create in the DNS so that I can have some nameservers to use at my domain registrar, so that the domain correctly points to the server? I know that most domains have nameservers like ns1.domain.com, ns2.domain.com, etc. What would I point those to in my DNS?
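
    A hedged sketch of the server-side half: create host (A) records for ns1 and ns2 in the zone on the Windows DNS server, pointing at the static public IP, and then register those same names as glue/host records at the registrar. Names and IP below are placeholders:

        dnscmd /RecordAdd mydomain.com ns1 A 203.0.113.10
        dnscmd /RecordAdd mydomain.com ns2 A 203.0.113.10

    The same records can also be added through the DNS Manager console as "New Host (A)" entries.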

    Read the article

  • Setting Rails up on a Linode - Nginx Issue

    - by rctneil
    I am extremely new to this so please don't shoot me down: I have set up a Linode running Ubuntu. It is all sort of working except Nginx. I am following this guide: http://rubysource.com/deploying-a-rails-application/ And this for nginx: http://library.linode.com/web-servers/nginx/installation/ubuntu-10.04-lucid When I go to my IP, I get a 500 internal server error. I have tried starting nginx and it looks like it starts fine. I run this:

        ps awx | grep nginx

    and I get:

        308 ?        Ss     0:00 nginx: master process /usr/sbin/nginx
        2309 ?       S      0:00 nginx: worker process
        2311 ?       S      0:00 nginx: worker process
        2312 ?       S      0:00 nginx: worker process
        2313 ?       S      0:00 nginx: worker process
        2850 pts/0   S+     0:00 grep --color=auto nginx

    I really am not sure what else to do to get it running. Any help? Neil
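
    Since nginx itself is clearly running, the 500 is coming from whatever nginx is serving or proxying to, and the specific cause will be in its error log. A couple of hedged first checks (paths assume the Ubuntu packages from the linked guide):

        sudo nginx -t                            # validate the configuration
        tail -n 50 /var/log/nginx/error.log      # shows what the 500 actually is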

    Read the article

  • Cannot connect to HTTPS port on Ubuntu

    - by Simpleton
    I've installed a new SSL certificate and set up Nginx to use it. But requests time out when trying to hit HTTPS on the site. When I telnet to my domain on port 80 it connects, but it times out on port 443. I'm not sure if there are some defaults on Ubuntu preventing a connection.

    UFW status shows:

        443    ALLOW    Anywhere

    netstat -a shows:

        tcp    0    0 *:https    *:*    LISTEN

    nmap localhost shows:

        443/tcp open  https

    The relevant block in the Nginx config is:

        server {
            listen 443;
            listen [::]:80 ipv6only=on;
            listen 80;
            root /path/to/app;
            server_name mydomain.com
            ssl on;
            ssl_certificate /etc/nginx/ssl/ssl-bundle.crt;
            ssl_certificate_key /etc/nginx/ssl/server.key;
            location / {
                proxy_pass http://mydomain.com;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }
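
    Two hedged observations, based only on what is pasted: the server_name line is missing its semicolon, and with ssl on; applied to the whole server block the plain port-80 listener is also expected to speak SSL. A sketch of the usual arrangement (nginx 0.7.14+), marking only the 443 listener as SSL; if an outside telnet to 443 still times out after this, the blocker is more likely a firewall upstream of the box than Ubuntu itself:

        server {
            listen 443 ssl;
            listen 80;
            server_name mydomain.com;
            ssl_certificate     /etc/nginx/ssl/ssl-bundle.crt;
            ssl_certificate_key /etc/nginx/ssl/server.key;
            # location / { ... } proxy block unchanged
        }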

    Read the article

  • AWS:EC2:: Could not connect FTP client?

    - by heathub
    My Server OS: Amazon Linux. I am trying to set up FTP. I have:

        - Installed vsftpd
        - Opened ports 20-21
        - Opened ports 1024-1048 (basically, I followed every one of these steps)
        - Started the vsftpd service (the status indicates [ok])

    I use FileZilla for my FTP client. Here is my setting/configuration:

        Host: ec2-XX-XX-XXX-XX.compute-1.amazonaws.com
        Port: (blank, but I have tried 20 and 21 though)
        Server Type: FTP - File Transfer Protocol
        Logon Type: Normal
        Username: (tried root and ec2-user)
        Transfer mode: Tried passive and active

    I always get this error:

        Status: Waiting to retry...
        Status: Resolving address of ec2-XX-XX-XXX-XX.compute-1.amazonaws.com
        Status: Connecting to XX.XX.XXX.XX:21...
        Error:  Connection timed out
        Error:  Could not connect to server

    Have I missed any configuration/settings?

    EDIT: After executing /sbin/iptables -L -n, here is the result:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
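
    The local iptables chains above are wide open, so the timeout on port 21 points at the EC2 security group rather than the instance; port 21 and the passive range need to be open there, not just on the box. Once they are, a hedged vsftpd.conf fragment for passive mode behind NAT (the port range mirrors the question; the address is a placeholder for the instance's public IP):

        pasv_enable=YES
        pasv_min_port=1024
        pasv_max_port=1048
        pasv_address=203.0.113.10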

    Read the article

  • Transport rule - Exchange 2010

    - by Jeff
    I have two transport rules on my Exchange server. One is:

        Apply rule to messages: from users that are 'outside the organization' and when any of the recipients in the To or Cc fields is a member of 'nointernetmail@company.com', forward the message to the sender's manager for moderation

    The second is:

        Apply rule to messages from a member of 'nointernetmail@company.com' and sent to users that are 'outside the organization', forward the message to the sender's manager for moderation

    nointernetmail is a distribution group, and each user has Managed By set to their local manager. However these transport rules do not work; internet mail is still sent and received without issue. I have read various tutorials / articles on how to do this on sites such as msexchangeblog and even Microsoft TechNet, however even after following the guides I am still unable to get this to function properly. Any help is appreciated.
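
    Two hedged points worth checking: the "forward to the sender's manager for moderation" action uses the Manager attribute on the sender's own AD account (Organization tab), not the Managed By setting on the group, so it can do nothing for senders whose Manager field is empty; and for the inbound rule the senders are external, so they have no AD manager at all to forward to. For comparing against what is actually configured, a sketch of the outbound rule in the Exchange Management Shell (names taken from the question):

        New-TransportRule -Name "Moderate outbound internet mail" `
          -FromMemberOf "nointernetmail@company.com" `
          -SentToScope NotInOrganization `
          -ModerateMessageByManager $true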

    Read the article

  • Best Configuration For Youtube HD 720p

    - by Eray Alakese
    Hello, I'm recording a screencast with CamStudio (http://camstudio.org/). My computer's screen resolution is 1280x800, so the video's resolution is 1280x800 too. I'm using the Microsoft Video 1 codec when recording. I recorded a 9-minute video and its size is 214 MB. I will upload this video to YouTube. I'm coding a web site in the video, and because of this the video must be good quality (720p). I want to reduce the file's size before uploading. I'm using Total Video Converter (http://www.effectmatrix.com/total-video-converter/), but when I convert to FLV the video's size increases to 250MB :) I don't know how to configure this setting or which file type I should choose. (screenshot here: http://i.imgur.com/Se0EP.jpg)
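
    Converting to FLV won't help here; for YouTube 720p the usual target is an H.264 MP4. As a hedged alternative to the converter in the question, a single ffmpeg command (a different tool, file names are placeholders) that scales the 1280x800 capture down to 720p height:

        ffmpeg -i screencast.avi -vf scale=-2:720 -c:v libx264 -crf 20 -c:a aac output.mp4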

    Read the article

  • 500 internal server error

    - by Rockr
    I am facing a 500.0 Internal Server Error quite frequently with my website. The error details are given below:

        HTTP Error 500.0 - Internal Server Error
        C:\PHP\php-cgi.exe - The FastCGI process exceeded configured activity timeout

        Module          FastCgiModule
        Notification    ExecuteRequestHandler
        Handler         PHP_via_FastCGI
        Error Code      0x80070102
        Requested URL   http://mydomain.com:80/index.php
        Physical Path   C:\HostingSpaces\coderefl\mydomain.com\wwwroot\index.php
        Logon Method    Anonymous
        Logon User      Anonymous

    When I contacted the support team, they said that my site is making heavy SQL queries. I am not sure how to debug this, but my site is very small and the database is optimized. I'm running WordPress as the platform. How do I resolve this issue?
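
    The error says php-cgi.exe sat longer than the FastCGI activityTimeout without producing output, so the two angles are making the slow requests faster (the host's "heavy SQL" claim) and, where that isn't possible, raising the timeout. A hedged sketch of the latter on IIS 7.x (value in seconds; the path must match the registered php-cgi.exe):

        %windir%\system32\inetsrv\appcmd set config -section:system.webServer/fastCgi ^
          "/[fullPath='C:\PHP\php-cgi.exe'].activityTimeout:300" /commit:apphost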

    Read the article

  • Mailman bounces go to me, do not unsubscribe user! (MacOS Server)

    - by vy32
    I run a large mailing list (30,000 users). Every month I get a whole slew of bounces in my mailbox from the "mailing list memberships reminder" that gets sent out. I thought that these were supposed to go to the program and automatically unsubscribe people, not go to me. We are running a standard unmodified MacOS Server installation. Here is a typical bounce:

        From: Mail Delivery System
        Subject: Undelivered Mail Returned to Sender
        To: [email protected].COM

        This is the mail system at host mail.COMPANY.COM.

        I'm sorry to have to inform you that your message could not be delivered to
        one or more recipients. It's attached below.

        For further assistance, please send mail to postmaster.

        If you do so, please include this problem report. You can delete your own
        text from the attached returned message.

        The mail system ...

    Any idea what to do?
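
    A hedged pointer at the stock Mailman knob for this: the monthly reminders are only matched back to individual subscribers for automatic bounce processing when they are VERP'd; otherwise the bounce just lands with a human. The setting name below is from stock Mailman and its effect should be verified on the MacOS Server build:

        # in mm_cfg.py: per-recipient envelope senders for the monthly reminders
        VERP_PASSWORD_REMINDERS = Yes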

    Read the article

  • How to suppress "Not collecting exported resources without storeconfigs"?

    - by Andy Shinn
    I'm getting the following in my Puppet master syslog over and over:

        Sep 27 11:52:05 puppet1 puppet-master: Not collecting exported resources without storeconfigs
        Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs
        Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs

    I'm not actually using storeconfigs:

        [ashinn@puppet1 ~]$ cat /etc/puppet/puppet.conf
        [agent]
        server = puppet.mydomain.com
        environment = production
        report = true

        [main]
        logdir = /var/log/puppet
        vardir = /var/lib/puppet
        ssldir = /var/lib/puppet/ssl
        rundir = /var/run/puppet
        factpath = $vardir/lib/facter
        pluginsync = true
        certname = puppet1.mydomain.com

        [master]
        modulepath = $confdir/environments/$environment/modules
        manifest = $confdir/environments/$environment/manifests/site.pp
        templatedir = $confdir/templates
        autosign = $confdir/autosign.conf
        ssl_client_header = SSL_CLIENT_S_DN
        ssl_client_verify_header = SSL_CLIENT_VERIFY
        report = true
        reports = hipchat

    Any way I can suppress these messages? What do they actually come from?
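
    That message is logged each time a catalog compiles a collector for exported resources while storeconfigs is off, so the source is almost certainly a module or manifest rather than puppet.conf. A minimal hedged example of the construct to grep the module path for (the resource type here is arbitrary):

        # exported on some node...
        @@host { $::fqdn: ip => $::ipaddress }

        # ...collected elsewhere; this collector is what triggers the warning
        Host <<| |>>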

    Read the article

  • Can't access dfs namespace over vpn

    - by cpf
    Hi Serverfault, I've recently configured 2 servers in AD on the same domain level. They are physically separated and permanently connected through a site-to-site VPN for DFS replication. All is well, but when users connect to either site through VPN (from home, e.g.) they can't use the domain-level method: \\domain.com\data. Internally this works perfectly, and resolving domain.com when connected through VPN gets the correct IP. I've tried Google to figure things out. What I was able to find was that more people have this issue, but no real solution was found. Can anyone explain why this is happening? Especially a solution would be really helpful! Thanks in advance.
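
    A couple of hedged checks from a VPN-connected client, since domain-based DFS paths (\\domain.com\data) only work when the client resolves the AD domain and its service records against the internal DNS servers rather than public ones:

        nslookup domain.com
        nslookup -type=SRV _ldap._tcp.domain.com

    If those don't return the domain controllers' internal addresses over the VPN, the namespace referral can't be built even though a direct \\server\share path would still work.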

    Read the article

  • Safari, IIS and optional Client Certificates

    - by Philipp
    I have an ASP.NET web app running on IIS 7.5. The web server is configured to accept client certificates. Unfortunately, visitors with the Safari browser are unable to view the page. Same problem as described under the following link: http://www.mnxsolutions.com/apache/safari-providing-an-ssl-error-client-certificate-rejected%E2%80%9D-when-other-browsers-work.html Does anyone know how to solve this? I'd really appreciate your help. Edit: Seems to be the same problem: http://superuser.com/questions/231695/iis7-5-ssl-question-safari-users-get-a-prompt-of-certificate-to-select

    Read the article

  • App / protocol to tune into live audio and video based on schedule or subscription

    - by Richard
    Many of us have embraced the podcasting revolution enabled by RSS feeds and podcatchers. A lot of sites now broadcast live streams of what is eventually edited into a podcast. In most cases listening to the live stream gets you the info several days sooner than the podcast. So I was wondering if anybody knows of a notification protocol / app that allows me to auto-tune into certain streams when they go live, or based on a schedule. I imagine Twitter could be used for the notification, but it'd be better not to be tied to a proprietary service. Example podcasts / live streams: noagenda.squarespace.com, jupiterbroadcasting.com, twit.tv

    Read the article

  • Wordpress 3 multi site install

    - by mike
    Hello, Trying to figure out if this is possible... My company has a CMS product that was written in Java, and we decided to use Wordpress to run blogs for our clients. Obviously, Wordpress does not run on Tomcat (at least not by default), so we installed Pound (http://www.apsis.ch/pound/) on our server and have set up Apache and Tomcat on different ports. When "/blog/" is requested, the request is directed to Apache. This works fine, but we would like to use Wordpress multi site so that we can manage all the blogs from a single interface. We would also like the URL for every site to be "/blog/", for example: http://www.site1.com/blog/ http://www.site2.com/blog/ I'm thinking it would have to be done with Apache??? Is it even possible? Thanks!
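
    Multisite itself is switched on from wp-config.php and then networked through the WordPress admin UI; serving several unrelated domains, each under /blog/, additionally needs a domain-mapping plugin on top of that, which isn't shown here. A hedged first step (WordPress 3.x):

        /* in wp-config.php, above the "stop editing" line */
        define('WP_ALLOW_MULTISITE', true);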

    Read the article

  • How can I redirect URLs using the proxy module in Apache?

    - by LearningIT
    This seems like a super-basic question, but I am having a hard time tracking down a straightforward solution, so I appreciate any help and patience with me on this: I want to configure my Apache proxy server to redirect certain URLs so that, for example, a web browser HTTP request for www.olddomain.com gets passed to the proxy server, which then routes the request to www.newdomain.com, which sends a response to the proxy server, which then passes it back to the web browser. Seems so simple, yet I don't see how to achieve this on Apache. I know Squid/Squirm offer this functionality, so I am guessing I am missing something really basic. I know I can use RewriteRule to dynamically modify the URL and pass it to the proxy server, but I effectively want to do the reverse, whereby the proxy server receives the original URL, applies the RewriteRule, and then forwards the HTTP request to the new URL. Hope that makes sense. Thanks in advance for any help.
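
    Assuming mod_proxy and mod_proxy_http are loaded, a minimal hedged vhost that does exactly this forward-and-return for the example domains in the question:

        <VirtualHost *:80>
            ServerName www.olddomain.com
            ProxyPass        / http://www.newdomain.com/
            ProxyPassReverse / http://www.newdomain.com/
        </VirtualHost>

    ProxyPass sends the incoming request to the new domain, and ProxyPassReverse rewrites redirect headers in the response so they still point at the proxy.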

    Read the article

  • Limiting and redirect port access with useragent

    - by linuxcore
    I'm trying to write an iptables string match rule to block http://domain.com:8888 and https://domain.com:8888 when it matches the supplied string in the rule, and another rule to redirect the ports also from 8888 to 7777. I tried the following rules but unfortunately they didn't work:

        iptables -A INPUT -p tcp -s 0.0.0.0/0 -m string --string linuxcore --algo bm --sport 8888 -j DROP
        iptables -t raw -A PREROUTING -m string --algo bm --string linuxcore -p tcp -i eth0 --dport 8888 -j DROP
        iptables -t nat -A PREROUTING -p tcp --dport 8888 -m string --algo bm --string "linuxcore" -j REDIRECT --to-port 7777
        iptables -A INPUT -t nat -p tcp --dport 8888 -m string --algo bm --string "linuxcore" -j DROP

    I want to do this from iptables, not the webserver, because the server may not have a webserver and those ports are working on an internal proxy or something like that ...etc
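
    A few hedged notes on why these rules misbehave: for incoming traffic the destination port is matched with --dport (the first rule uses --sport); the last rule appends a DROP to the nat table, which only ever sees a connection's first packet, so it won't filter payloads; string matching never sees anything useful inside HTTPS because the payload is encrypted; and a REDIRECT in nat PREROUTING is decided on that same first packet (the empty SYN), so a payload string match cannot drive the port redirect at all. What string matching can do is drop matching plain-HTTP packets, for example:

        iptables -A INPUT -i eth0 -p tcp --dport 8888 -m string --algo bm --string "linuxcore" -j DROP

    Redirecting based on request content generally needs a proxy in front rather than iptables alone.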

    Read the article

  • Inbound connections using Internet Connection Sharing in Apple/Mac/Leopard

    - by tlianza
    I have a Mac mini which I'm using to give some other devices wireless access, by sharing its Airport connection with the local ethernet, which is plugged into a switch. All devices can get online no problem. (See how: http://www.macosxhints.com/article.php?story=20041112101646643 and http://www.macosxhints.com/article.php?story=20071223001432304 ) The issue is that I need to be able to connect in to these machines as well (at least, for the Slingbox to work). All the devices have 192.168.2.* addresses, and the rest of my local network is on 192.168.1.*. I tried setting a static route so that the 192.168.2.* addresses would use a gateway of 192.168.1.50 (my Mac mini's address), but that didn't seem to help. Does anyone know if what I'm trying to do is possible? I admit I'm not certain what Internet Connection Sharing is really doing under the hood... perhaps it just does basic NAT and doesn't do the type of routing I'm looking for. If so, anyone know if this is possible?

    Read the article

  • PowerShell - Limit the search to only one OU

    - by NirPes
    I've got this cmdlet and I'd like to limit the results to only one OU:

        Get-ADUser -Filter {(Enabled -eq $false)} | ? { ($_.distinguishedname -notlike '*Disabled Users*') }

    Now I've tried to use -SearchBase "ou=FirstOU,dc=domain,dc=com", but if I use -SearchBase I get this error:

        Where-Object : A parameter cannot be found that matches parameter name 'searchbase'.
        At line:1 char:114
        + Get-ADUser -Filter {(Enabled -eq $false)} | ? { ($_.distinguishedname -notlike '*Disabled Users*') } -searchbase <<<< "ou=FirstOU,dc=domain,dc=com"
            + CategoryInfo          : InvalidArgument: (:) [Where-Object], ParameterBindingException
            + FullyQualifiedErrorId : NamedParameterNotFound,Microsoft.PowerShell.Commands.WhereObjectCommand

    What I'm trying to do is to get all the disabled users from a specific OU, BUT there is an OU INSIDE that FirstOU that I want to exclude: the "Disabled Users" OU. As you might have guessed, I want to find disabled users in a specific OU that are not in the "Disabled Users" OU inside that OU. My structure:

        Forest
          FirstOU
            Users, groups, etc...
            Disabled Users OU
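
    The error is a placement problem: -SearchBase belongs to Get-ADUser, but in the pasted line it ends up after the Where-Object (?) pipe, so Where-Object rejects it. A hedged reordering that keeps the exclusion of the nested OU:

        Get-ADUser -Filter {Enabled -eq $false} -SearchBase "OU=FirstOU,DC=domain,DC=com" |
            Where-Object { $_.DistinguishedName -notlike '*OU=Disabled Users*' }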

    Read the article

  • DNS propagation

    - by Paddington
    I have 1 primary DNS server (ns1.mydomain.com) running on Fedora and 2 secondary ones (ns2 and ns3). DNS changes made on my web servers first go to the primary name server and then propagate to the secondary servers. After making a DNS change on a domain on the web server, I can't see the new DNS information on my ns1 when I perform:

        dig @ns1 A blahblah.com

    I then went to the master records on the name server (it uses named) in the directory /var/named/run-root/var/named/masters and I see the A record has been updated appropriately. Tailing the log /var/log/messages is not showing any errors. What could be the issue?
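
    Two hedged things to check on ns1 when the zone file on disk is right but dig still returns the old data: whether the SOA serial in the zone was incremented along with the change (named keeps serving its in-memory copy until the serial moves), and whether named actually reloaded the zone:

        rndc reload blahblah.com                   # reload just that zone
        dig @localhost blahblah.com SOA +short     # compare the served serial with the file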

    Read the article

  • Why are some web clients requesting a page named "cache"?

    - by Toto
    We see errors like this in the apache error log:

        [Thu May 17 14:32:35 2012] [error] [client 192.168.1.1] File does not exist: /home/www-data/mywebsite.com/r/cache, referer: http://www.mywebsite.com/r/1010

    It is strange because:

        - There is no reference in the code/url about a folder/file "cache".
        - The folder/file "cache" does not exist.
        - The client is randomly trying to access a "cache" folder everywhere on the website.

    It is always trying to access the folder/file "cache" following this pattern:

        Pattern: /level1/.../levelwhatever/filename (referer)
                 /level1/.../levelwhatever/cache

    We run a LAMP stack (Debian stable: PHP 5.3.3-7+squeeze9. We also use APC 3.1.3p1). We use Google Analytics and AdSense. We do not know how to reproduce the problem. Note: I replaced the user's IP in the code for privacy.
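
    A hedged way to narrow down who is doing this is to pull the user agents of just those requests out of the access log; browser add-ons, prefetchers and some crawlers tend to identify themselves there (combined log format and Debian's default log path are assumptions):

        grep 'GET [^ ]*/cache ' /var/log/apache2/access.log | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn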

    Read the article

  • How to install Gitlab in a VM on a production server?

    - by Michaël Perrin
    I have a production server running Ubuntu 12.04 and I would like to install on it a VM with Gitlab (using Vagrant and Virtualbox). Let's say that the address to access Gitlab is gitlab.mydomain.com. The DNS zone has been configured to point to the IP address of the server. I want users to be able to access Gitlab (either for pushing to a repository, or for accessing the web interface) from the outside. The VM has been configured to have an IP address. It means that when browsing http://gitlab.mydomain.com, for instance, the request has to be forwarded to the VM on the server, i.e. to the VM's IP address. What are the ways to configure this? Can Apache be used as a proxy? In this case, I guess it only works for HTTP requests, but not for pushing to a Git repository on the VM.
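
    Yes, Apache can front the VM for the web side: a name-based vhost for gitlab.mydomain.com with mod_proxy forwarding to the VM's address covers HTTP(S) and Git pushes over HTTP. Pushing over SSH is a separate, non-HTTP path, so it needs a port forward instead of a proxy. A hedged Vagrantfile fragment for that part (Vagrant 1.1+ syntax; port numbers are assumptions):

        # in the Vagrantfile, inside Vagrant.configure(...)
        config.vm.network "forwarded_port", guest: 80, host: 8080    # HTTP, fronted by Apache
        config.vm.network "forwarded_port", guest: 22, host: 2222    # git over SSH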

    Read the article
