Search Results

Search found 21717 results on 869 pages for 'setup versions'.


  • Setting a wireless access point on Ubuntu server 11.10

    - by Solignis
    I am trying to set up a wifi access point with my Ubuntu server. I have managed to get my phone to connect to the wireless network, and it now gets a DHCP lease. However, it still cannot ping out or be pinged by anything on my network. I am pretty sure my problem is iptables, but I'm not sure what would be wrong. Here is what my rules look like (the ones pertaining to the bridge interface):

        # Allow traffic to / from wireless bridge interface
        iptables -A INPUT -i br0 -j ACCEPT
        iptables -A OUTPUT -o br0 -j ACCEPT

    I am guessing my rules are a little lean. The bridge exists on the same subnet as everything else on my network; I am using a 10.0.0.0/24 subnet. EDIT: I should also mention that when I do a ping test, I get Destination Host Unreachable as the error.
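
    A hedged sketch of what is often missing in this kind of bridged setup: INPUT and OUTPUT only cover traffic terminating at or originating from the server itself, so if bridged frames are being passed through iptables (the bridge-nf behaviour) and the FORWARD policy is restrictive, the phone's traffic to the rest of the LAN is silently dropped. The interface name and the sysctl below are assumptions to adapt, not values confirmed by the question:

        # allow traffic crossing the bridge in both directions
        iptables -A FORWARD -i br0 -j ACCEPT
        iptables -A FORWARD -o br0 -j ACCEPT
        # alternatively, stop running bridged frames through iptables at all
        sysctl -w net.bridge.bridge-nf-call-iptables=0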

    Read the article

  • Issue with Visual C++ 2010 (Express) External Tools command

    - by espais
    Hi all, normally we develop in VS 2005 Pro, but I wanted to give VS 2010 a spin. We have custom build tools based on GNU make that are called when creating an executable. This is the error that I see whenever I call my external tool:

        ...\gnu\make.exe): *** couldn't commit memory for cygwin heap, Win32 error 487

    The caveat is that it still works perfectly fine in VS 2005, as well as when called straight from the command line. Also, my external tool is set up exactly the same as in VS 2005. Is there some setting somewhere that could cause this error to be thrown?

    Read the article

  • IIS not listening over external network, all other traffic working

    - by Beuy
    Hello there, I have a very odd situation. I have a server (let's call it X) running 2008 R2 with two NICs in it: one is connected to the work domain and has a subnet of 192.168.10.0/24, the other is connected to an ADSL connection and has a subnet of 192.168.1.0/24. The server has IIS installed. On the ADSL connection I have set up dynamic DNS and port forwarding to allow external HTTP, HTTPS, FTP and RDP connections. FTP and RDP are working fine, however neither HTTP nor HTTPS is working at all. I can browse the websites by going to localhost on the machine, but the HTTP and HTTPS ports appear as "Filtered" when I try to scan them using PortQueryUI, and browsers respond with a "Server took too long to load or was not responding" error. This was working fine just a few days ago. Windows Firewall is disabled and I don't have any other software firewall on it, and I'm really lost. Any help would be great.
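
    A couple of checks that usually narrow this down, offered as a hedged diagnostic sketch to run on the server itself:

        REM confirm HTTP.SYS/IIS is listening on 0.0.0.0 (or on the ADSL-facing address) and not only on the domain NIC
        netstat -ano | findstr ":80 :443"
        REM an empty IP listen list means "listen on all addresses"; a populated list restricts the bindings
        netsh http show iplisten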

    Read the article

  • Any good method for mounting Hadoop HDFS from another system?

    - by Beel
    I want to mount Cloudera Hadoop as a Linux file system over the LAN. As a setup, I already have the Hadoop cluster running on a set of Ubuntu machines, but now I need to be able to use it as a normal file system from a Fedora system over the LAN. I tried FUSE, but two things: 1. Cloudera says FUSE loses data (click here for that comment by a Cloudera employee on the official Cloudera support site); 2. I've had no success making it work the way we want. As a point of clarification, I am using Hadoop ONLY for the file system, not for its other capabilities.

    Read the article

  • How to specify a network address for a DHCP client

    - by drecute
    I have set up and configured a DHCP server on a SPARC machine running Solaris 10. Now I want to test the DHCP server by creating a DHCP client on another computer running Solaris 11. I would like to know how to specify a network address for a DHCP client such that the IP address it is given falls within a specific subnet. For example: the DHCP server host is 172.1.1.1, so I want the client machine to have an IP address in the range 172.1.1.1 / 255.255.255.0. Please help me.
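
    For what it's worth, the subnet a client ends up in is decided by the server's network table for the link the request arrives on, not by anything configured on the client. A hedged sketch of the commands involved (the interface name and the network table name are assumptions):

        # on the Solaris 11 client: simply request a DHCP lease on the interface
        ipadm create-addr -T dhcp net0/v4
        # on the Solaris 10 server: confirm a network table exists for 172.1.1.0 and inspect its entries
        pntadm -L
        pntadm -P 172.1.1.0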

    Read the article

  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Let me give you some background about the setup: we initially got a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say bigjaws.com. Plesk automatically creates a DNS zone for it with some records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde/whatever, etc. Let me point out four relevant records (where XXX.XXX.XXX.158 is our dedicated IP):

        bigjaws.com.        A     XXX.XXX.XXX.158
        mail.bigjaws.com.   A     XXX.XXX.XXX.158
        bigjaws.com.        MX    (10) mail.bigjaws.com.
        bigjaws.com.        TXT   v=spf1 +a +mx -all

    The above records are not(?) valid anymore though, because after using this dedicated server for a while our site got bigger and bigger, so we decided to move our operations over to AWS (EC2, RDS, ELB, etc.), but we retained the mail functionality as is, i.e. emails from [email protected] are sent by connecting to our dedicated server, where Plesk takes care of things. This was decided in order not to set up anything from scratch. Of course, for all DNS-related things we now use Route53. In Route53 I have the following records:

        mail.schoox.com.    A     XXX.XXX.XXX.158
        bigjaws.com.        MX    (10) mail.bigjaws.com
        bigjaws.com.        SPF   "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all"

    From my understanding of SPF, the SPF status should have been pass: I designate that all email sent by bigjaws.com from XXX.XXX.XXX.158 is valid/not spam (I added +mx there but I'm not sure if it is needed). When a mail server receives an email, doesn't it look up the SPF record of the domain and check it against the IP it got the email from? Checking with spfquery:

        root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected]
        StartError
        Context: Failed to query MAIL-FROM
        ErrorCode: (2) Could not find a valid SPF record
        Error: No DNS data for 'bigjaws.com'.
        EndError
        noneneutral
        Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default
        spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com
        Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected];

    If I go to the address listed above (openspf.org), it tells me that the message should have been accepted(!):

        spfquery rejected a message that claimed an envelope sender address of [email protected]. spfquery received a message from static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that claimed an envelope sender address of [email protected]. The domain bigjaws.com has authorized static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) to send mail on its behalf, so the message should have been accepted. It is impossible for us to say why it was rejected. What should I do? If the problem persists, contact the bigjaws.com postmaster.

    Also, here are some headers from an email sent by one of our [email protected] addresses to a gmail.com address (by the way, bigjaws.de listed in the "Received: from" field was the initial domain hosted on the dedicated server before adding the .com one -- both are still listed as separate subscriptions under Plesk):

        Delivered-To: [email protected]
        Received: by 10.14.177.70 with SMTP id c46csp289656eem; Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
        X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386; Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158]) by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59 for <[email protected]> (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 23 Oct 2013 01:10:59 -0700 (PDT)
        Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=XXX.XXX.XXX.158;
        Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]
        DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com; b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu; h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding;
        Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200
        Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP) by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated); 23 Oct 2013 10:10:59 +0200
        Message-ID: <[email protected]>
        Date: Wed, 23 Oct 2013 11:11:00 +0300
        From: BigJaws Employee <[email protected]>
        User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1
        MIME-Version: 1.0
        To: [email protected]
        Subject: test SPF
        Content-Type: text/plain; charset=ISO-8859-1
        Content-Transfer-Encoding: 7bit

        test SPF

    Any ideas why SPF is not working correctly? Also, are there any DNS settings that are not needed anymore and create a problem?
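
    One detail worth flagging, offered as a hedged observation rather than a definitive diagnosis: the spfquery error "No DNS data for 'bigjaws.com'" is what tends to appear when the policy exists only as a Route53 record of type SPF. Most receivers, Google included, evaluate the TXT record, so the same string normally has to be published as TXT as well; the value below is illustrative only:

        bigjaws.com.    TXT    "v=spf1 ip4:XXX.XXX.XXX.158 mx ~all"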

    Read the article

  • How to put 1000 lightweight server applications in the cloud

    - by Dan Bird
    The company I work for sells a commercial desktop/server app that runs on any non-dedicated Windows PC or server and uses Tomcat for all interactions with the application. Customers are asking that we host their instance of the application so they don't have to run it locally on their own servers. The app is lightweight, and an average server, in theory, could handle 25-50 instances before users would notice a slowdown. However, only one instance can run per Windows instance (because the application writes to a common registry branch), so we'd need something like VMware to create 25-50 Windows instances. We know we eventually need to reprogram it to make it truly cloud-worthy, but what would you recommend for a server farm or whatever for this? We don't have the setup to purchase our own servers, so we must use a third party. We have budgeted $500 - $1000 per year per customer for this service. Thanks in advance for your suggestions, experiences and guidance.

    Read the article

  • DNS Replication issue

    - by BillN
    We host the DNS for our domain. Two weeks ago, the developer requested that we set up a new zone 'dev.ourdomain.com' and place two host records in it, my.dev.ourdomain.com and admin.dev.ourdomain.com. We added the zone to our DNS and added A records for the hosts. Now, a week later, some DNS servers like Google (8.8.8.8) and GTEI (4.2.2.2) will resolve the hosts, but others like OpenDNS (208.67.222.222) and AT&T Uverse (68.94.156.1) cannot resolve them. Any ideas?
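
    A hedged diagnostic sketch (the nameserver name below is an assumption; substitute the real authoritative servers for ourdomain.com): when some public resolvers see the new records and others do not, the usual suspects are a secondary that never transferred the new zone or a delegation that is not consistent on every authoritative server, and dig makes that visible quickly:

        # walk the delegation from the root and see where resolution stops
        dig +trace my.dev.ourdomain.com
        # ask each listed authoritative server directly; every one of them should answer
        dig @ns1.ourdomain.com my.dev.ourdomain.com +norecurse
        # compare a resolver that works with one that does not
        dig @8.8.8.8 my.dev.ourdomain.com
        dig @208.67.222.222 my.dev.ourdomain.com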

    Read the article

  • What is the command to use to put your computer to sleep (not hibernate)?

    - by airrick
    I want to put my Windows PC (Win7) into a sleep state via the command line (so I can bind it to a macro button on the keyboard). The power button on the PC is set up to put the computer to sleep exactly how I want it (it sleeps using hybrid mode in case I lose power), but it's down on the floor and I'm too lazy to reach down. The sleep command on the shutdown menu also works. Most info I found says to use:

        %windir%\system32\rundll32.exe PowrProf.dll, SetSuspendState 0,1,0

    But this puts the computer in hibernate mode. I do have hibernate disabled, but I am using hybrid sleep. So, what is the command to use to put your computer to sleep (not hibernate)?
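
    A hedged sketch of one commonly used alternative: SetSuspendState tends to hibernate whenever hibernation is available, while the Sysinternals PsShutdown tool has an explicit suspend switch (this assumes PsTools is downloaded and on the PATH):

        REM suspend (sleep) immediately; -d = suspend, -t 0 = no delay
        psshutdown.exe -d -t 0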

    Read the article

  • Servers at remote sites vs. centralized servers?

    - by Boden
    Looking for some opinions here. We've got three physical locations and site-to-site VPN between all three. Currently we've got Windows domain controllers at each location, with roughly 50 clients at each. The domains are currently separate, and we're looking at integrating the three sites. Email (Exchange) will be located at the primary site, and RDP is already being used at the secondary branches to hit the app servers also located at the primary site. The bulk of the local user load at the other two sites is just file sharing. What would the main benefits and drawbacks be of replacing the local domain controllers with NAS devices, and only keeping the domain controller(s) at the primary site? (Assuming upgrades are coming regardless.) Under what circumstances would you choose one setup over the other?

    Read the article

  • Releasing software/Using Continuous Integration - What do most companies seem to use?

    - by Sagar
    I've set up our continuous integration system, and it has been working for about a year now. We have finally reached a point where we want to do releases using it as well. Before our CI system, the processes that were used were:

        (Develop) -> Ready for release -> Create a branch -> (Build -> Fix bugs as QA finds them) Loop -> Final build -> Tag
        (Develop) -> Ready for release -> (Build -> Fix bugs) Loop -> Tag

    Our CI setup:

        - 1 server for development (DEV)
        - 1 server for qa/release (QA)

    The second one has integrated into CI perfectly. I create a branch when the software is ready for release, and the branch never changes thereafter, which means the build is reproducible without having to change the CI job. Any future development takes place on HEAD, and even maintenance releases get a completely new branch and a completely new job, which remains on the CI system forever, and then some. The first method is harder to adapt. If the branch changes, the build is not reproducible unless I use the tag to build [jobs on the CI server use the branch for QA/RELEASE, and HEAD for development builds]. However, if I use the tag to build, I have to create a new CI job to build from the tag (and lose the changelog on the server), or change the existing job (and lose the original job configuration). I know this sounds complicated, and if required, I will rewrite/edit to explain the situation better. However, my question: [if at all] what process does your company use to release software using continuous integration systems? Is it even done using the CI system, or manually?

    Read the article

  • Thunderbird segmentation fault problem

    - by Dariusz Górecki
    Hey all, my Thunderbird crashes a few seconds after it boots up, regardless of whether it is in safe mode or normal mode. Here is a strace log: http://pastebin.com/tccfYwcD I've searched Google, the Ubuntu forums, and the Mozilla bug tracker about this problem, but none of the answers I found helped me :/ I've tried:

        - nscd daemon install (solutions posted earlier)
        - a fresh, clean install of Ubuntu 10.10 (32b) and Thunderbird 3.1.7 from the repos on a VM; the problem still exists
        - removing all Thunderbird-related dot files and dot dirs, and setting up the profile from the beginning
        - removing Thunderbird and related stuff with apt purge, and installing TB from the official .deb package

    None of these steps helps me; TB still crashes with a segmentation fault :/ I use a Gmail IMAP account. I've searched and found a few other tips on Google, but with no success. I've even tried to remove the mails from the web GUI that came after the first notice of the problem, still no luck. I'm not using any network FS, I'm not sharing this account with anyone, and I do not use filesystem or home encryption either, just a clean install :/ If you guys need some more info for this, let me know. TB: 3.1.7 from repos and official package. Ubuntu 32bit 10.10 with Medibuntu repos, fully updated.
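
    Since the strace log only shows system calls, a native backtrace usually says more about a segfault. A hedged sketch of how one could be captured for a bug report (assumes the gdb package is installed):

        # run Thunderbird under gdb, reproduce the crash, then dump the backtrace
        gdb --args thunderbird -safe-mode
        # at the (gdb) prompt:
        #   run
        #   bt full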

    Read the article

  • Technicolor TG582n with external DHCP server [on hold]

    - by Jack
    We have a small home setup with a Technicolor TG582n on Plusnet ISP. We have a Samba4 DC with DNS forwarding enabled. The DC forwards to the Technicolor. However, the client machines have their DNS settings manually set. This is an annoyance when using laptops on other networks. We would like to have DHCP handled on the server machine, such that when a client connects to the Technicolor, it gets its IP and DNS information from the DHCP server, eliminating the need to manually set adapter DNS settings. However, I cannot find an option to disable DHCP on the Technicolor and am not completely clear on how one would point DHCP services to the server from the Technicolor if there were the option. So, how would one make the Technicolor use an external server for DHCP leases?
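
    For the server side of that plan, a minimal ISC dhcpd sketch of what the DC could hand out once the Technicolor's own DHCP no longer answers; every address below is an assumption about the home network, not a value taken from the question:

        # /etc/dhcp/dhcpd.conf on the Samba4 DC (hypothetical addresses)
        subnet 192.168.1.0 netmask 255.255.255.0 {
            range 192.168.1.100 192.168.1.199;
            option routers 192.168.1.254;               # the Technicolor stays the default gateway
            option domain-name-servers 192.168.1.10;    # the DC, so clients no longer need manual DNS entries
        }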

    Read the article

  • Apache https configurations

    - by sissonb
    I am trying to set up my domain name with a self-signed cert. I created the cert and placed the server.key and server.crt files into C:/apache/config/. Then I updated my httpd.conf to include the following:

        <VirtualHost 192.168.5.250:443>
            DocumentRoot C:/www
            ServerName mydomain.com:443
            ServerAlias www.mydomain.com:443
            SSLEngine on
            SSLCertificateFile C:/apache/conf/server.crt
            SSLCertificateKeyFile C:/apache/conf/server.key
            SSLVerifyClient none
            SSLProxyEngine off
            SetEnvIf User-Agent ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            CustomLog logs/ssl_request_log \
                "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
        </VirtualHost>

    Now when I go to https://mydomain.com I get the following error:

        SSL connection error
        Unable to make a secure connection to the server. This may be a problem with the server, or it may be requiring a client authentication certificate that you don't have.
        Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error.

    Can anyone see what I'm doing wrong? Thanks!
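
    A hedged sketch of the two httpd.conf pieces that most often produce ERR_SSL_PROTOCOL_ERROR with an otherwise sane vhost, namely Apache answering port 443 in plain HTTP because SSL was never enabled for the listener. It is also worth noting that the key and cert were placed in C:/apache/config/ while the vhost reads them from C:/apache/conf/:

        # make sure these are present and uncommented in httpd.conf
        LoadModule ssl_module modules/mod_ssl.so
        Listen 443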

    Read the article

  • Managing products on an ecommerce site [closed]

    - by John
    I've had a site that sells widgets for many years. I do not inventory my widgets, but the cost of adding them to the site and making sure the site is current is becoming cost prohibitive. Here are the facts:

        - I sell a single class of widget.
        - I have about 50,000 widgets on my site.
        - I have about 100 vendors that create and dropship the products when they get an order from me via email.
        - Each vendor carries from 50 to 5,000 types of widgets.
        - Vendors all have websites with images and descriptions of their products.
        - Each widget is produced in limited supply and usually sells out in 1-5 years.
        - Prices of the widgets often go up, sometimes by more than 50%, before they sell out.
        - My vendors aren't very tech sophisticated. They have websites with their products, but most can't supply an API or database dump. Their websites usually display retail prices to the public, but I log in or refer to a price list (usually Excel) for wholesale prices.

    As it stands now, I hire local people to add and describe each widget on our website. It usually takes a person 4 minutes to add a widget to the site. This doesn't include moving to a new vendor. I feel like the upload/edit process is as good as it can get via a form/website. The problem is that it is getting very expensive to upload and keep the widget inventory current. I often get orders for something after it's sold out at the vendor, or the price is wrong. This seems like it would be a problem in many industries. Can anyone suggest the cheapest way to upload inventory and ensure prices are current from my vendors? I'm assuming it will involve outsourcing, but I would like ideas on how to set up the compensation model.

    Read the article

  • Cable installed - now my hub has no connection to the router/modem - what do I need to buy?

    - by bcmcfc
    My previous setup was as follows:

        [modem/router]------[switch]+------[pc1]
                                    +------[pc2]

    I've just moved and had cable installed, and I no longer have the option of running a lengthy LAN cable from the router to the switch to provide network and internet access to the two PCs. The cable company provided 2 wireless N USB adapters. What do I need to buy, and plug into where, in order to restore the network to its previous state? PC1 dual boots Windows 7 and Ubuntu 12. PC2 runs Debian 6. Edit:

        USB adapters - Netgear WNDA3200
        Switch - TP-Link TL-SF1008D 8 port Ethernet switch
        Cabling - various patch cables cat5e rj45
        Modem/Router - pretty standard cable company job - wireless

    The intention is something like:

        [modem/router] --wifi-- [some-new-hardware or perhaps to pc1] ----[switch]---[pc1/2]

    Read the article

  • Is it possible to have multiple subdomains point to the same Blogger blog?

    - by cclark
    For our application we want to have a status page which is hosted outside of the rest of our infrastructure so in case there are issues in our data center we can post updates for our users and our users will be able to access them. We registered a blog on Blogger and set it up with xyzstatus.blogspot.com and status.xyz.com. Everything seems to work fine. We need to perform some maintenance at our datacenter which will sever all connectivity so we're unable to have a redirect using nginx or apache. We'd like to do this with a short TTL CNAME DNS entry. Ideally www.xyz.com and app.xyz.com could be CNAMEd to status.xyz.com. When I setup the CNAME and go to that URL I get a Google broken robot 404 page. I figure I must need to let Google know it should associate traffic to www.xyz.com and app.xyz.com to the blog served up by status.xyz.com. But I can't see anywhere to do this in Blogger. Does anyone know if this is possible?

    Read the article

  • VirtualBox upgrade

    - by Husni
    I upgraded VirtualBox from 4.1 to 4.2. Whenever I want to load my Win XP vdi, it gives me the following error:

        "Kernel driver not installed (rc=-1908) The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is a permission problem with /dev/vboxdrv. Please reinstall the kernel module by executing '/etc/init.d/vboxdrv setup' as root. If it is available in your distribution, you should install the DKMS package first. This package keeps track of Linux kernel changes and recompiles the vboxdrv kernel module if necessary."

    I ran the suggested step to reinstall the kernel module, and the log file is as follows:

        Makefile:181: * Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR= and run Make again. Stop.
        Makefile:181: * Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR= and run Make again. Stop.
        Makefile:181: * Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR= and run Make again. Stop.

    I am still unable to run my Win XP vdi file. Does anyone have a clue?
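
    The Makefile error means the module build cannot find the kernel headers. A hedged sketch of the usual fix on Ubuntu/Debian-style systems (package names may differ on other distributions):

        sudo apt-get install build-essential dkms linux-headers-$(uname -r)
        sudo /etc/init.d/vboxdrv setup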

    Read the article

  • Cannot connect via SSH to a remote PostgreSQL database

    - by tartox
    I am trying to connect via the pgAdmin3 GUI to a PostgreSQL database on a remote server myHost on port 5432. Server side: I have a Unix user myUser that matches a PostgreSQL role. pg_hba.conf is:

        local   all   all                  trust
        host    all   all   127.0.0.1/32   trust

    Client side: I open an SSH tunnel:

        ssh -L 3333:myHost:5432 myUser@myHost

    I connect to the server via pgAdmin3 (or via psql -h localhost -p 3333). I get the following error message:

        server closed the connection unexpectedly
        This probably means the server terminated abnormally before or while processing the request.

    I have tried to access a specific database with the superuser role using psql -h localhost -p 3333 --dbname=myDB --user=mySuperUser, with no more success. What did I forget in the setup? Thank you
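
    A hedged sketch of the detail that usually bites here: with -L 3333:myHost:5432 the tunnel terminates on myHost's external address, which neither the "local" line nor the "127.0.0.1/32" line in pg_hba.conf covers, and listen_addresses may not include that interface at all. Pointing the tunnel at the server's loopback keeps the existing rules valid:

        # terminate the tunnel on the server's loopback instead of its external address
        ssh -L 3333:localhost:5432 myUser@myHost
        # then, unchanged on the client
        psql -h localhost -p 3333 --dbname=myDB --user=mySuperUser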

    Read the article

  • How can I allow individual developers to have their own space to create git repositories?

    - by Jason Baker
    I have a server that is essentially a gitosis setup. I have a git user that has access to all the shared repositories. What I would like to do is have each developer be able to have their own "area" on this server to create their own repositories. I'd like these areas to be able to be viewable via gitweb. How can this be done that would require the least maintenance in terms of adding users and repositories? One obvious solution would be to just allow each developer to create repositories on the git login and have branches named something like <devname>-<reponame>. But I could see this getting unmanageable as the number of developers grows.
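
    Not gitosis itself, but its successor gitolite handles exactly this with "wild" repositories; a hedged sketch of its conf (the group name is an assumption), which lets each developer create repos under their own prefix, and gitweb can then be pointed at the same repository root:

        # gitolite.conf sketch (hypothetical @developers group)
        repo    dev/CREATOR/[a-zA-Z0-9].*
                C   = @developers      # members may create repos under dev/<their-username>/
                RW+ = CREATOR          # the creator gets full access to their own repos
                R   = @all             # everyone else may read, and gitweb can list them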

    Read the article

  • proxy software that supports parallel transfer

    - by est
    Hi guys, I need to set up a really fast proxy server on a remote server. Here's the scenario:

        - The server prefetches 3KB of data, mostly HTTP resources.
        - The server sends the 3KB of data to the client, but instead of a traditional HTTP or SOCKS proxy transfer, it opens a multithreaded transfer with 3 connections and sends 1KB of data per thread on each connection.
        - The client receives 1KB x 3, combines them back into the original 3KB of data, and serves it through a local HTTP proxy server.
        - The client displays the original data in the browser via the local HTTP proxy.

    The latency is not important as long as the transfer rate is good. Does any software like this exist? It would be better if it were open source or free.

    Read the article

  • Server 2008 R2 Remote Desktop Gateway Role and IIS7

    - by user137466
    I am attempting to set up an RD Gateway for a client. When I first set it up I noticed that IIS did not have the 'Default Web Site', so I created it, assigned it an ID of 1 and set the bindings to ports 80 and 443. I then reinstalled the RD Gateway role with the idea that it would then configure IIS correctly. It did not. How would I go about making sure a reinstall of the Remote Desktop Gateway role configures IIS correctly? I cannot reinstall IIS, as there is a site already on there that I cannot take down.
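
    A hedged sketch of a scripted role re-add on 2008 R2 (the feature name should be confirmed with Get-WindowsFeature first); removing and re-adding only the gateway role does not touch the existing IIS site:

        Import-Module ServerManager
        Get-WindowsFeature RDS-Gateway      # confirm the feature name and its current state
        Remove-WindowsFeature RDS-Gateway
        Add-WindowsFeature RDS-Gateway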

    Read the article

  • Purpose oriented user accounts on a single desktop?

    - by dd_dent
    Starting point: I currently do development for Dynamics AX, Android and an occasional dabble with Wordpress and Python. Soon, I'll start a project involving setting up WP on Google App Engine. Everything is, and should continue to, run from the same PC (running Linux Mint). Issue: I'm afraid of botching/bogging down my setup due to tinkering with and installing multiple runtimes/IDEs/SDKs/services, so I was thinking of using multiple users, each purposed to handle the task at hand (web, Android, etc.) and making each user as inert as possible to the others. What I need to know is the following: Is this a good/feasible practice? The second closest thing to this is using remote desktop connections, either to computers or to VMs, which I'd rather avoid. What about switching users? Can it be made seamless? Anything else I should know? Update and clarification regarding VMs and whatnot: The reason I wish to avoid resorting to VMs is that I dislike the performance impact and sluggishness associated with them. I also suspect they might add a layer of complexity I wish to avoid. This answer by Wyatt is interesting, but I think it's only partly suited to my requirements (web development, for example). Also, in reference to the point made about system-wide installs, there is a level of compromise I should accept, as expressed by this, for example. This option suggested by 9000 is also enticing (more than VMs, actually), and by no means do I intend to "juggle" JVMs and whatnot, partly due to the reason mentioned before. Regarding complexity, I agree and would consider what was said, only from my experience I tend to pollute my work environment with SDKs and runtimes I tried and discarded, which occasionally leave leftovers that cause issues throughout the session. What I really want is a set of well-defined, non-virtualized sessions from which I can choose at my leisure and be mostly (to a reasonable extent) safe from one session affecting another. And what I'm really asking is if, and how, this can be done using user accounts.

    Read the article

  • Processor upgrade on a laptop vs. RAM upgrade. Also, does RAM always matter?

    - by Evan
    I have a Dell Inspiron 14R (N4110) with an Intel Core i3 and 4GB of RAM. It runs very smoothly, however gaming on this laptop is very limited. This is mostly because of the integrated graphics, but I have seen a computer with a Core i5, and very similar specs otherwise, run games that the N4110 cannot. This other computer has integrated graphics and 6GB of RAM. I am wondering whether upgrading the RAM or upgrading the processor makes the most difference in performance. Which setup would get better performance, an i5 with 4GB of RAM or an i3 with 8GB of RAM? (Both with integrated graphics.) Also, is there a certain point at which you have too much RAM for the computer to ever possibly use? For instance, is there really any difference in performance between 8GB and 16GB of RAM?

    Read the article

  • Postfix/dovecot remove LDAP user

    - by dove221
    I have to remove or blacklist an LDAP/Dovecot user. The authentication is set up from Active Directory, which I cannot manage, so I thought there should be a way to at least disable this specific user on the mail server locally.

        # Virtual Accounts - LDAP - MS AD
        virtual_mailbox_maps = ldap:/etc/postfix/ldap_mailbox_maps.cf
        virtual_alias_maps = ldap:/etc/postfix/ldap_alias_maps_redirect_true.cf ldap:/etc/postfix/ldap_alias_maps_redirect_false.cf ldap:/etc/postfix/ldap_mailbox_groups.cf
        virtual_mailbox_domains = domain.com
        virtual_uid_maps = static:1000
        virtual_gid_maps = static:1000
        virtual_transport = dovecot
        dovecot_destination_recipient_limit = 1

    Does anybody know how to do it? I followed this guide for disabling one user through Postfix's access file: http://www.cyberciti.biz/faq/howto-blacklist-reject-sender-email-address/ Unfortunately it doesn't work. It's like the settings stored in LDAP are overruling the access rule. Instead of Postfix rejecting the mail, it keeps accepting it. Thanks!
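
    For what it's worth, the guide linked above blacklists senders; to stop mail being delivered to a local recipient, the lookup has to be a recipient check evaluated in smtpd_recipient_restrictions. A hedged sketch (the address is a placeholder for the disabled user, not a value from the question):

        # in main.cf: check recipients against a local table before anything else permits the mail
        smtpd_recipient_restrictions =
            check_recipient_access hash:/etc/postfix/recipient_blacklist,
            permit_mynetworks,
            reject_unauth_destination

        # in /etc/postfix/recipient_blacklist (placeholder address):
        #   blocked.user@domain.com    REJECT User disabled
        # afterwards: postmap /etc/postfix/recipient_blacklist && postfix reload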

    Read the article
