Search Results

Search found 11953 results on 479 pages for 'functional testing'.

Page 266/479 | < Previous Page | 262 263 264 265 266 267 268 269 270 271 272 273  | Next Page >

  • Domino to Exchange 2007 (or 2010) Design Concerns?

    - by NickToyota
    Today we got the executive green light to proceed with changing from a Domino platform to Exchange. The business prefers Exchange as a messaging platform (even though IMO IBM Domino is fine - if it ain't broke, don't fix it - but it was not my call). I have been put in charge of making sure the Domino to Exchange migration goes as smoothly as possible, and I have also been told to put together costs for this project. I have some questions and concerns regarding network design, licensing and costs. The current setup is as follows:
      - 1 HQ office (100 users), 1 secondary office (50 users), 5 branch offices (under 10 users)
      - 5 different email domains
      - Windows Server 2003 functional level with a few 2008 R2 servers
      - Lotus Domino Notes servers (one in each office)
      - IronMail appliance
      - Public Domino web mail server
      - Majority G5+ ProLiant servers
      - Domino BlackBerry Enterprise license and server
      - No VoIP phones
    My questions: What are the basic hardware requirements for Exchange 2007 or 2010? Can I simply purchase a single physical server? Will each office require an Exchange server, or possibly additional servers (roles)? How is email routed to the smaller branch offices? Standard or Enterprise licenses? The business has been running Domino (messaging and application services) for over 10 years and also wants Exchange to support email services, BlackBerry, Outlook Web Access, and possibly iPhone devices. Thank you, Server Fault universe.

    Read the article

  • DFS replication and the SYSTEM user (NTFS permissions)

    - by HopelessN00b
    Question for which I'm having trouble finding an answer on Google or TechNet... Does granting the SYSTEM user permissions to DFS-shared files and folders have any effect on DFS replication? (And while we're at it, is there any good reason not to let SYSTEM have permissions to DFS-shared files?)
    It comes up because I have a collection of DFS namespaces and folders that I'm not able to make someone else's problem, and while troubleshooting a problem where one DFS replica just wasn't replicating with another for no discernible reason, I observed that the SYSTEM account didn't have any permissions granted to any of the files or folders in the folder in question. So I set SYSTEM to have full control and propagated it down, and our DFS health diagnostic reports went from showing a backlog of ~80 files to a backlog of ~100,000... and things started replicating, including a number of files that had been missing on one server or the other (so more than just the permissions changes started replicating).
    Naturally, this made me curious as to whether or not DFS needs the SYSTEM account to have permissions to do its work, or if perhaps it was just any change to the folder tree in question that prompted DFS to jump into action. If it matters, our DFS namespaces were set up under 2000/2003, and I have just recently finished upgrading all the servers to 2008 R2 or 2012 (with UAC enabled, blech), but have not yet gotten around to raising the DFS namespace functional levels to Server 2008. (And bonus points if anyone has an official Microsoft article on NTFS file permissions and the SYSTEM account as it pertains to DFS or network files.)
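    For reference, a minimal sketch of the kind of command used to grant SYSTEM full control and propagate it down a replicated folder (the path is a placeholder, not taken from the question):
      rem (OI)(CI) = inherit to files and subfolders, /T = also apply to existing children
      icacls "D:\DFSRoots\SharedFolder" /grant "NT AUTHORITY\SYSTEM:(OI)(CI)F" /T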

    Read the article

  • Getting a TTY in a Connectback Shell

    - by Asad R.
    I'm often asked by friends to help with small Linux problems, and more often than not I'm required to log in to the remote system. Usually there are a lot of issues with making an account and logging in (sometimes the box is behind a NAT device, sometimes SSHD isn't installed, etc.) so I usually just ask them to make a connect-back shell using netcat (nc -e /bin/bash <host> <port>). If they don't have netcat I can just ask them to grab a copy of a statically compiled binary, which isn't that hard or time consuming to download and run. Though this works well enough for me to enter simple commands, I can't run any apps that require a TTY (vi, for example) and can't use any job control functions. I managed to bypass this issue by running in.telnetd with a few arguments within the connect-back shell so that it would assign me a terminal and drop me to a shell. Unfortunately in.telnetd isn't usually installed by default on most systems. What's the easiest way to get a fully functional connect-back terminal shell without requiring any non-standard packages? (A small C program that does the job would be fine as well, I just can't seem to find much documentation on how a TTY is assigned/allocated. A solution that doesn't require me to plough through the source code for SSHD and TELNETD would be nice :))
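    One commonly cited workaround, sketched here for illustration only (it assumes Python or the util-linux script binary is present on the remote box, which the question does not state):
      # upgrade a dumb connect-back shell to one backed by a pseudo-terminal
      python -c 'import pty; pty.spawn("/bin/bash")'
      # or, using util-linux script to allocate the pty
      script -qc /bin/bash /dev/null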

    Read the article

  • How do I switch java versions to an earlier version in Fedora 17?

    - by JHutson456
    I just installed Fedora 17. I'm setting up the Android build environment and need Java. I downloaded and installed jdk-6u32-linux-amd64.rpm, ran java -version, and it spit out the correct version. Well, a day or two later I tried my first compile in Fedora 17 and it complained about Java and failed. I ran java -version again and lo and behold it spits out:
      $ java -version
      java version "1.7.0_03-icedtea"
      OpenJDK Runtime Environment (fedora-2.1.fc17.7-x86_64)
      OpenJDK 64-Bit Server VM (build 22.0-b10, mixed mode)
    I'm stumped. I mean, I've run the update/upgrade commands since I installed, but I didn't think that updated full version revisions... So I ran alternatives --config java and that only gave me the Java 1.7 version. While digging around more I discovered that the recommended version of Java for the build environment is jdk-6u27-linux-x64-rpm.bin, so I downloaded that from here: Oracle Download. When I ran sudo sh jdk-6u27-linux-x64-rpm.bin it returned:
      Unpacking...
      Checksumming...
      Extracting...
      UnZipSFX 5.50 of 17 February 2002, by Info-ZIP ([email protected]).
        inflating: jdk-6u27-linux-amd64.rpm
        inflating: sun-javadb-common-10.6.2-1.1.i386.rpm
        inflating: sun-javadb-core-10.6.2-1.1.i386.rpm
        inflating: sun-javadb-client-10.6.2-1.1.i386.rpm
        inflating: sun-javadb-demo-10.6.2-1.1.i386.rpm
        inflating: sun-javadb-docs-10.6.2-1.1.i386.rpm
        inflating: sun-javadb-javadoc-10.6.2-1.1.i386.rpm
      Preparing...    ########################################### [100%]
        package jdk-2000:1.6.0_32-fcs.x86_64 (which is newer than jdk-2000:1.6.0_27-fcs.x86_64) is already installed
      Done.
    So now I'm confused. I ran alternatives --config java again but it's still only returning 1.7, so I don't know what to do. I want to end up with 6u27 as the installed and functional version of the JDK. Thank you.
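    For illustration, a sketch of how an Oracle JDK could be registered with and selected through alternatives (the install path is an assumption based on where the Oracle RPM normally lands, not something stated in the question):
      # register the Oracle JDK with the alternatives system at a high priority
      sudo alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_32/bin/java 20000
      # then pick it interactively and verify
      sudo alternatives --config java
      java -version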

    Read the article

  • Ubuntu rm not deleting files

    - by ILMV
    My colleague and I have been struggling with deleting a directory and its contents. We are working on a new version of our website's source code on Ubuntu 8.04 (dir: /var/www/websites); what we want to do is delete the websites directory and recreate it from a .tar backup we created a couple of weeks ago. The purpose of this is so we can run our deployment procedure in a local environment before we do so on our live / public environment. We use this command: rm -r websites. This deletes the directory and the files within it. The problem occurs when we un-tar our backup file and view the website: we are getting files that don't exist in the .tar backup; in fact these files were only created a few days ago and should have been deleted. We delete the directory once more in the manner stated above, and then create a new websites directory using the mkdir command. Strangely, at this stage the 'deleted files' do not come back, but if we unpack our .tar file the 'deleted files' appear again. Is there a way to ensure these files are deleted, or at least the pointers that associate them with said directory?
      - Our .tar backup does not include these files
      - We do not want to use the shred command
      - We do not want to use 3rd party applications
      - The solution should be functional via terminal (SSH)
    Many thanks! EDIT: Er... we fixed it. Turns out the files that were reappearing were there because of a link we have to another directory (outside /var/www/websites); we were restoring the link but not deleting the files on the other end. D'oh! Many thanks for your help guys... Friday afternoon syndrome :-)
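    As an aside, a small sketch of how such stray links can be spotted before deleting (the top-level path is from the question; the link path is a hypothetical example):
      # list any symbolic links under the tree, and where they point
      find /var/www/websites -type l -ls
      # removing the link itself does not touch the files it points at
      rm /var/www/websites/path/to/link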

    Read the article

  • How can I get my routers to forward ports correctly?

    - by Giffyguy
    My network currently looks like this (simplified): Note that Router #2 is connected to the LAN interface of Router #1. This should be familiar to anyone who has seen a standard static-IP setup with an additional firewall for a residence or other small building. Router #1 is actually my cable gateway, but since it is a fully functional router/firewall, I am going to refer to it as a router. Now, I need to open various ports in both firewalls for incoming communication to my server - port 80 is a good example. So I've opened up port 80 in Router #2, and so far all incoming traffic at the public IP X.X.X.129 is being routed correctly. The problem is that I also need my server to respond to incoming traffic at the public IP X.X.X.130 on the WAN interface of Router #1. Naturally, I can't just tell Router #1 to forward port 80 to another public IP. Port forwarding is only supported when the traffic is being directed to the LAN subnet. I am willing to restructure my network topology if required, with the following conditions:
      - Router #1 cannot have its WAN IP reassigned - X.X.X.130 is mandatory.
      - Router #1 cannot be moved or disconnected from the cloud.
      - The server cannot be given a second IP address.
      - I would prefer the server to have a private IP address - e.g. 10.0.0.10
      - I'd like to keep Router #2, but it can have a private IP - e.g. 10.0.1.10
    Following these rules, I need to get my server to receive incoming traffic on port 80 from both public IP addresses. Does anyone on SU know if this is possible? So far my only theories have been to set up a static route on either router, or to somehow combine my two subnets into a single subnet.
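    Purely as an illustration of the double port-forward idea (neither router in the question is stated to be a Linux box, and the addresses below are the placeholders from the question), the two hops would look roughly like this in iptables terms:
      # on Router #1: send port 80 arriving on X.X.X.130 to Router #2's LAN-side address
      iptables -t nat -A PREROUTING -d X.X.X.130 -p tcp --dport 80 -j DNAT --to-destination 10.0.1.10:80
      # on Router #2: send port 80 on to the server's private address
      iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.10:80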

    Read the article

  • Fast Ethernet module for Cisco 2620

    - by Kenny Rasschaert
    I have a Cisco 2620 Router. It comes with one fast ethernet port built in (circled in red), and one old AUI ethernet module is installed (circled in blue). I figure I can put a transceiver on the AUI interface to get a second RJ45 connector. What I'd really like to have is a second fast ethernet connector. The ideal candidate to achieve this would be the NM-1FE-TX module. Cisco claims on their website that this module is not suitable for the Cisco 2620 and Cisco 2620XM. It says so in "Table 2 Physical Limitation of Serial Modules per Chassis". Indeed, this module was designed for the 3600 series of routers. I've seen claims on the internet, however, of people having this module on a 2620XM, and it being fully functional. This claim gains some credibility because of the fact that in Cisco's own Packet Tracer software, you can install this module on the 2620XM router. I'm looking for a definitive answer. Will this module work on a Cisco 2620? Is there perhaps another way to get a second fast ethernet port on this device?

    Read the article

  • How to restore broken Ethernet functionality on Mac G5 running Mac OS 10.4.11 (Tiger)

    - by willc2
    I had a disk error that rendered my Mac unbootable. I repaired it with Tech Tool 4, but now networking does not work. Network Preferences reports that my Ethernet cable is unplugged. I know this is bogus because when I boot from an emergency partition, networking works correctly. Furthermore, wireless networking is also broken, which I tested with a known-good Wi-Fi dongle. Whenever I try to change Network Port Configurations, for example by creating a new one or renaming an existing one, I get this message in the console (in sets of two, as shown):
      Error - PortScanner - setDevice, device == nil!
      Error - PortScanner - setDevice, device == nil!
    When I try to invoke the Network Diagnostics app, it immediately crashes. My first thought is to reinstall Tiger with the Archive and Install method so I don't have to reinstall all my applications, but I have lost my Tiger installer disk. My next thought is to buy Leopard for $107 on Amazon. If there is any way I can just repair my Tiger install I would be happy to save that money, though. This is not my main machine and I am loath to put more money into it. How can I recover my network functionality? UPDATE: I found my Tiger install disk and tried an Archive and Install. It failed with an unhelpful error message along the lines of "Can't install, try again". I tried again but had the same error. My guess is that some corrupt or missing file in my User folder is preventing migration. I have a backup created with Super Duper that is a bit out of date but will start up the machine (with functional networking). I would love to just copy over the file(s) that got messed up, but I don't even know where to look. What is the likely location of the System files that would cause the aforementioned symptoms?
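    One frequently suggested repair for this class of symptom, sketched here with the caveat that it simply resets the network configuration to defaults (the file names are the standard Tiger locations, not something confirmed by the question):
      # move the network configuration store aside and let the OS rebuild it on reboot
      sudo mv /Library/Preferences/SystemConfiguration/preferences.plist ~/Desktop/
      sudo mv /Library/Preferences/SystemConfiguration/NetworkInterfaces.plist ~/Desktop/
      sudo reboot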

    Read the article

  • Password Policy seems to be ignored for new Domain on Windows Server 2008 R2

    - by Earl Sven
    I have set up a new Windows Server 2008 R2 domain controller, and have attempted to configure the Default Domain Policy to permit all types of passwords. When I want to create a new user (just a normal user) in the Domain Users and Computers application, I am prevented from doing so because of password complexity/length reasons. The password policy options configured in the Default Domain Policy are not defined in the Default Domain Controllers Policy, but having run the Group Policy Modelling Wizard, these settings do not appear to be set for the Domain Controllers OU. Should they not be inherited from the Default Domain Policy? Additionally, if I link the Default Domain Policy to the Domain Controllers OU, the Group Policy Modelling Wizard indicates the expected values for complexity etc., but I still cannot create a new user with my desired password. The domain is running at the Windows Server 2008 R2 functional level. Any thoughts? Thanks! Update: Here is the "Account policy/Password policy" section from the GPM Wizard:
      Policy                            Value                    Winning GPO
      Enforce password history          0 passwords remembered   Default Domain Policy
      Maximum password age              0 days                   Default Domain Policy
      Minimum password age              0 days                   Default Domain Policy
      Minimum password length           0 characters             Default Domain Policy
      Passwords must meet complexity    Disabled                 Default Domain Policy
    These results were taken from running the GPM Wizard at the Domain Controllers OU. I have typed them out by hand as the system I am working on is standalone, which is why the table is not exactly the wording from the Wizard. Are there any other policies that could override the above? Thanks!
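    For illustration, a couple of standard commands that show the password policy a DC is actually enforcing, independent of what the modelling wizard predicts (nothing here is specific to the question's environment):
      rem refresh policy on the domain controller, then dump the effective account policy
      gpupdate /force
      net accounts
      rem full resultant-set-of-policy report for comparison
      gpresult /h rsop.html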

    Read the article

  • Kernel hacking methodology - how to find out where to hack the linux kernel

    - by Flavius
    I have a throw-away cheap laptop I'd like to twiddle around with, a ThinkPad SL 500. What bothers me are two LEDs, the one for wireless connectivity and the one for hibernation, which don't light up at all, although they're functional - I've tried it on Windows. So I would like to write a kernel driver for them; nothing big, it just looks like a good idea to play around with the kernel. My question is what methodology I should follow systematically to find out what devices are responsible for those LEDs (in general, not necessarily specific to my hardware), and what drivers are responsible for the other two LEDs that work, Bluetooth and the battery indicator? And when I say methodology, I really mean the methodology, step by step, with reasons for each step, like in the answer I gave to someone else over here: What does && mean in void *p = &&abc; I am proficient at fgrepping through big code repositories, using static code analysers & co, but I think my lack of hardware knowledge hinders me on this problem. PS: I'm using Arch Linux, so almost the latest kernel version.
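    As a starting point for that kind of exploration, a sketch of the usual places to look on a running system (the LED name below is hypothetical; actual names under /sys/class/leds vary by machine):
      # LEDs that already have a driver bound show up here
      ls /sys/class/leds/
      # toggling one manually confirms which physical LED it maps to
      echo 1 | sudo tee /sys/class/leds/tpacpi::power/brightness
      # the platform driver (thinkpad_acpi on ThinkPads) and the hardware behind it
      lsmod | grep thinkpad
      sudo dmidecode -t system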

    Read the article

  • Diagnosing SAN connectivity issues (RHEL5)

    - by Matthew
    We are currently utilizing GFS2 to share a SAN LUN between 3 servers. However, due to a feature problem with vendor software we are using, we currently have the volume unmounted on two of the boxes and are instead exporting the GFS2 filesystem via NFS from the first one (the software requires some weird locking mechanics that GFS2 doesn't support). As of this morning, NFS was no longer able to read/write to the volume from any of the servers, including the NFS server. I then tried checking the normal mount (the directory that is exported on the NFS server) and I received a weird input/output error just trying to cd into it. When I tried running multipath, I got a DM error; however, multipath -l worked just fine. I tried unmounting the GFS2 volume, and the CLI hung. I ran init 0, which killed most services, but then the shutdown appeared to hang. I logged in via out-of-band access (HP iLO) and saw that the shutdown was hung trying to unmount GFS2 volumes. My main priority was getting the box back online, so after about 5 minutes of waiting I did a hard reset. I am now trying to figure out what went wrong. What are the correct logs to investigate? I've never run into SAN issues like this before. The SAN is connected via 2 fibre connections. Any help would really be appreciated. Everything appears to be up and functional now.
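    For illustration, the sort of places one could look first on RHEL 5 after an event like this (generic commands, nothing specific to this environment):
      # kernel, multipath and GFS2 messages around the time of the failure
      grep -iE 'gfs2|dm-|multipath|scsi|fc' /var/log/messages | less
      # current state of every path to the LUN
      multipath -ll
      # fibre channel link state as the HBA driver sees it
      cat /sys/class/fc_host/host*/port_state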

    Read the article

  • Cisco Pix 515 ip addressing

    - by Rickard
    I have just gotten my hands on a Cisco PIX 515 (not a 515E) with 3 interfaces, and am just about to start some labs. In my lab I am using a real-life scenario from an actual setup at work. As I have no access to the device at work, I am simply trying to replicate the scenario by trial and error. At work we are given two IP addresses from the provider, which are 1-to-1 NATed addresses. The addresses we are allowed to use are 10.131.35.4-5/29. Now, we have 3 servers on a DMZ, 192.168.2.2-4/24, 17 client computers on 192.168.1.100-117/24, as well as some statically addressed devices on 192.168.1.8-18/27. My question is: how would I best set it up so that the machines on the DMZ get translated to 10.131.35.4 and the machines on 192.168.1.* get translated to 10.131.35.5? I don't expect or want anyone to give me a fully functional config - I may learn from it, but I'd prefer to just have some advice, or maybe a guide on how to set it up. I hope someone can shed some light on my situation; I have been looking through Google but I guess the search words I'm using aren't too good, as I can't find any good clues. Thank you very much! PS. Maybe I should add that I am not unfamiliar with the Cisco CLI, as I prefer using that over any GUIs. So I'm not really looking for any solutions for the ASDM. DS.
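    To illustrate the general shape of such a setup (PIX 6.x-style syntax, with interface names assumed to be inside/outside/dmz; a sketch, not a working config):
      ! inside clients PAT out as .5, DMZ hosts PAT out as .4
      global (outside) 1 10.131.35.5
      nat (inside) 1 192.168.1.0 255.255.255.0
      global (outside) 2 10.131.35.4
      nat (dmz) 2 192.168.2.0 255.255.255.0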

    Read the article

  • Virtual machines interconnection inside Proxmox 2.1 Cluster

    - by Anton
    We have 3 physical servers (each with 1 NIC) in different datacentres, all of them interconnected by an OpenVPN bridged private network (10.x.x.x). Inside this network we have a fully functional 3-node Proxmox 2.1 cluster. So, the actual question is: is there any "proper" way to make a "global" local network (172.16.x.x) for all VMs inside the cluster, so that even if we move a VM from one node to another we can reach it by static IP regardless of its physical location? BTW, we can't add a dedicated NIC to each server. Thanks in advance. EDIT: I have tried to make a separate OpenVPN bridge for 172.16.x.x; now I have two interfaces on each server:
      SRV1: openvpnbr1 - 172.16.13.1, vmbr0 - 172.16.1.1
      SRV2: openvpnbr1 - 172.16.13.2, vmbr0 - 172.16.2.1
    But now there is no connection between those ifaces:
      SRV1: ping 172.16.13.2
      From 172.16.1.1 icmp_seq=2 Destination Host Unreachable
      SRV2: ping 172.16.13.1
      From 172.16.2.1 icmp_seq=2 Destination Host Unreachable
    If I shut down the vmbr0 interfaces, there is a connection between the servers over OpenVPN, but vmbr0 is used by Proxmox... Where am I wrong?
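    For illustration only, a sketch of the usual pattern of putting the OpenVPN tap device directly into a Proxmox-visible bridge so all nodes share one layer-2 segment (interface names and addresses are assumptions, and the OpenVPN instance is assumed to run in tap mode without an IP of its own):
      # /etc/network/interfaces fragment on one node (Debian-style, as Proxmox uses)
      auto vmbr1
      iface vmbr1 inet static
          address 172.16.1.1
          netmask 255.255.0.0
          bridge_ports tap0
          bridge_stp off
          bridge_fd 0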

    Read the article

  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level. I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others). When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only":
      1. BUILTIN\Administrators
      2. MYDOMAIN\Schema Admins
      3. MYDOMAIN\Offer Remote Assistance Helpers
      4. MYDOMAIN\MySecurityGroup
    Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and these security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?
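    One way to check whether the group is nested, directly or indirectly, under a protected group (the usual reason a custom group gets the deny-only treatment) is sketched below; the group name is taken from the question, everything else is generic:
      rem expand every group that MySecurityGroup is a member of, recursively
      dsquery group -name "MySecurityGroup" | dsget group -memberof -expand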

    Read the article

  • In spite of correct DNS, Exchange sending to wrong destination server for single outbound domain

    - by beporter
    My company uses an SBS 2003 server and makes use of Exchange to host our own email. We also have a linux server hosting domains for some of our clients. In order for us to send to those clients, we had internal DNS set up to shadow the client domains to provide "correct" MX records inside our network. For example, public DNS for a domain abc.com might point to 1.2.3.4, but internally we have MX records set up to route mail for abc.com to 172.16.0.4, which is the linux email server. This setup was entirely functional; this is just back story. We've recently moved one of our client domains from our internal linux server to an external email provider. When we did that, we naturally deleted our internal shadow DNS records so our Exchange server would fetch correct (public) DNS records and route mail out to the new external host. This has NOT had any effect on Exchange though. Even after rebooting the Exchange server and completely flushing the DNS cache (nslookups on the Exchange machine itself correctly resolve to the new external address) Exchange still attempts to deliver messages for the domain to our internal server! Exchange correctly routes to all other internal and external domains when sending email. Somehow Exchange is trying to deliver to a machine that by all accounts it has no business trying to use for just this one domain. Is there a DNS cache that Exchange uses internally? Is there a way to flush that internal cache? What else could I be missing?
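    For illustration, two generic checks that separate Windows DNS caching from SMTP-level routing on an SBS 2003 box (the domain is the placeholder from the question):
      rem what the server currently resolves for the domain's mail exchangers
      nslookup -type=mx abc.com
      ipconfig /flushdns
      rem restart the SMTP service that Exchange 2003 uses for outbound delivery
      net stop SMTPSVC && net start SMTPSVC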

    Read the article

  • Creating an Apache Virtual Directory, but updating Active Directory DNS

    - by SnoConeGod
    Hello all, I'm just getting started with using the Zend Framework and am following a recommended procedure where I am supposed to create an Apache Virtual Directory for the public-facing portion of a new Zend project. I don't THINK I had any issues creating the Virtual Directory, but my knowledge of the required DNS changes is rather lacking. The dev server I'm using is on a Microsoft Windows Active Directory domain, so I've added A records for both the server name and the subdomain. Still, trying to browse to the site from a Windows 7 PC isn't working properly. What am I missing? What's the proper set of steps for getting an Apache-served subdomain to appear properly in a peer computer's web browser? Details below: server: Debian command-line only, freshly installed today with Zend Server CE LAMP stack server name: ZENDEV subdomain: SQUARE.ZENDEV AD Domain functional level: 2008 mixed (run by a mishmash of 03 and 08 servers) attempting to visit the sites: http://square.zendev and http://square.zendev.domain.local (name of domain redacted, but using the local (not com) suffix) Apache Virtual Directory added to httpd.conf: NameVirtualHost *:80 <VirtualHost *:80> DocumentRoot "/var/www/square/public" ServerName square.localhost </VirtualHost> Is this only a problem with DNS? Or with DNS and my Virtual Directory? Thanks! John

    Read the article

  • How do I migrate Exchange 2007 to new hardware?

    - by Graeme Donaldson
    As per my previous question, I have an Exchange 2007 box which is also a DC. Since I can't demote it while Exchange is installed, I want to move Exchange to a different server. Does anyone have any articles, tips or experiences to share on this? The last time I did this it was with Exchange 2003, and even that is a little rusty in my head. The setup is a single Exchange 2007 Hub/Edge/Mailbox/CAS server. It's currently on Windows Server 2008; I can migrate it to the same OS, or I can go to 2008 R2, I'm not really picky on that. We're running OWA/ActiveSync/POP3(S)/IMAP(S) for client access. I already have another fully functional DC/GC/DNS box in the same site and clients in the site are already using that for DNS. It's also the preferred site bridgehead for AD replication. Update: After reading Evan's answer I realised that my original question wasn't worded correctly. I'm not looking to do a swing migration; I actually need to move Exchange completely over to a new box. I have done swing migrations in the past, i.e. moving over to a temporary box and back to the original hardware afterwards, and I'm not really sure why I used that term in the original question since it's not what I intended. Any tips?
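    For illustration, the mailbox-move step of such a migration in the Exchange 2007 management shell looks roughly like this (server and database names are placeholders; installing the roles on the new server and moving OAB generation, public folders and connectors are not shown):
      # move every mailbox from the old database to the one on the new server
      Get-Mailbox -Database "OLDSERVER\First Storage Group\Mailbox Database" |
          Move-Mailbox -TargetDatabase "NEWSERVER\First Storage Group\Mailbox Database"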

    Read the article

  • A-2-Z web hosting on Amazon AWS

    - by JDelage
    All, I am studying web development, and one of my classes is project-based. We have to build a functional site that demonstrates our understanding of HTML, CSS, JavaScript, PHP, MySQL, and potentially Ajax or some other web component. For the project, we can use a local server using WampServer and basically build the site entirely on our laptops. If I have time, I would like to create a real site, and I thought it would be a good way to familiarize myself with Amazon's AWS services. So if I purchase a domain name, can I rely on AWS to host the site from A to Z? I understand I can use AWS to host content, the database, and do the background computations, if needed. What else do I need, and what are the parts that AWS cannot help me with? Second, is there good documentation for a beginner to navigate AWS and learn how to use it (either on Amazon, or some 3rd party sites, or even a good book, as long as it is up to date)? The ideal documentation would be a tutorial on creating a web site from A to Z on AWS, as detailed as possible. As you can guess, I have limited understanding of the IT issues. I have 0 Linux or sysadmin experience, but this is a good opportunity to change that. I hope you can help me. Thank you, JDelage PS: Please keep the answers AWS-specific. At this point, I am only interested in alternative services to the extent that they plug a hole in Amazon's offering.

    Read the article

  • Monitor goes black for a few seconds

    - by privatehuff
    I have a Hanns G 28" monitor, Model # HG281D It has its issues (viewing angle sucks) but has been functional and solid, great for desktop stuff. Worked without any sign of any problems for 6-12 months. However, now the monitor "goes black" for about 2-3 seconds, almost like when you click "detect display" It does not turn off (power light does not go amber) The computer is completely unaffected and the video mode never changes when the picture returns. The computer is fully responsive and will keep playing music or taking my keypresses during the time I can't see anything. (it just happened and I kept typing, etc) It happens on multiple computers across several operating systems. (I have an 8-port iogear KVM switch that has several computers connected) But, it seems to happen only on certain computers. I have a hackintosh that does it, a windows 7 PC that does not, a lenovo laptop that does not, and my old ubuntu 8.10 box did not do it, but my new mint 8 box does do it. I've check the connections and tried changing out the power cable and the vga cable. Sometimes it won't happen for hours (or days) and sometimes it happens several times per hour. It was happening many months ago, did not happen for months, and has now started happening again. Does this make any sense? What could it be?

    Read the article

  • Translating IPTables rule to UFW

    - by Dario Fumagalli
    We are using an Ubuntu 12.04 x64 LTS VPS. The firewall being used is UFW. I have set up a Varnish + LEMP stack, along with other things including an Openswan IPsec VPN from our office to the VPS data center. A second in-house Ubuntu box is to act as a MySQL slave and fetch data from the VPS through the VPN. The master's ppp0 is seen as 10.1.2.1 from the slave, they ping each other, etc. I have done the various required tasks, but I can't get the client (slave) MySQL (nor telnet 10.1.2.1 3306) to access the master through the VPN unless I issue this fairly obvious iptables command: iptables -A INPUT -s 10.1.2.0/24 -p tcp --dport 3306 -j ACCEPT. (I deliberately restricted the accepted input to that /24.) With this rule everything works just fine! However, I want to translate this command to UFW syntax so as to keep everything in one place. Now, I admit being inexperienced with UFW. I prepared rules like: ufw allow proto tcp from 10.1.2.0/24 port mysql and 2-3 variations, involving specifying 3306 instead of mysql, specifying a target IP (MySQL's my.cnf at the moment is configured to bind to 0.0.0.0) and similar, but I just don't seem to be able to replicate the simple iptables rule in a functional way. Could anyone kindly give me a suggestion that is not to dump UFW? Thanks in advance.
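    As an illustration of the usual shape of such a rule (standard UFW syntax; the subnet and port are the ones already given in the question):
      # allow TCP 3306 from the VPN subnet only
      sudo ufw allow proto tcp from 10.1.2.0/24 to any port 3306
      # verify the resulting rule set
      sudo ufw status numbered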

    Read the article

  • Windows Domain Chaos - Any Solving Approach

    - by Chake
    We are running an old Windows Server 2003 machine as domain controller (DC2003). To safely migrate to Windows 2008 R2 we added a 2008 R2 server (DC2008R2) to the domain as a domain controller (adprep etc.). After dcpromo on DC2008R2 everything seemed to be OK. The new DC appeared under the "Domain Controllers" node. It wasn't checked at that time whether DC2008R2 could REALLY act as a domain controller. Later we tried to shut down DC2003 and ran into a total mess with non-functional Exchange and Team Foundation Services. After that I got the job to fix it... First I thought it could be a problem with DC2008R2, so I removed it as a domain controller and installed a new Windows 2008 R2 server, DC2008R2-2. I ran into similar problems. I tried a bunch of stuff, but nothing helped. I won't list it all; maybe I made a mistake, so I'm willing to redo it with your suggestions. To have a starting point I tried the Best Practices Analyzer, which ended up with 24 "Compatible" and 26 "Not Compatible" tests. Of these 26 tests, 19 read the same (I'm translating from German, so that may not be the exact wording): Problem: Using the Best Practices Analyzer for Active Directory Domain Services (Active Directory Domain Services Best Practices Analyzer, AD DS BPA), no data can be gathered using the name of the forest and the domain controller DC2008R2-2. I appreciate any suggestions; this really bothers me.
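    As a generic starting point for this kind of AD health check (standard built-in tools, nothing specific to this domain):
      rem overall DC health, DNS registration and connectivity
      dcdiag /v /c /e > dcdiag.txt
      rem replication status across all DCs
      repadmin /replsummary
      repadmin /showrepl
      rem confirm which DC holds each FSMO role
      netdom query fsmo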

    Read the article

  • How to configure a trusted connection between IIS 7 and SQL Server 2005?

    - by user1180652
    How do I configure a trusted connection between IIS 7 and SQL Server 2005? My webapp was working fine with Windows Authentication enabled in IIS. Now, in order to solve a problem, we need to use a trusted connection. Unfortunately, enabling the trusted connection in the web.config broke the webapp. Oddly enough, when I run this application with a trusted connection from my local dev machine (using the Cassini web server), it works fine. IIS (Windows Server 2008) is running on one machine. The database (SQL Server 2005, but it could migrate to 2008) is running on another machine. We are on a Windows domain running AD. All traffic is within our own firewall - no public access. Beyond that, I can't provide much info, but I can find it. We're very "compartmentalized" (we have server people, security people, Oracle people, SQL Server people, etc.). Thanks! Update 02/14/2012 0902: The webapp is now functional (the app is no longer broken) but the main issue is still unresolved. Now I have the app's application pool running as a domain account with permissions on the SQL Server box and the IIS box. We were using this account to run the application but, and here's the problem, we need to log the real user name that made a change. When using the service account, the name of that service account appeared in the audit tables, making the auditing quite useless. So, now I'm at least running again. The connection string in the web.config is using "Trusted_Connection=True", the app pool is using a domain account with access to both boxes, BUT when I make a change (logged in as me) the name of the service account (app pool identity) is still logged in the audit tables. I also manually granted full permissions to the service account on the webapp folder. What do I need to do in order to log my name, not the service account, in the audit tables? Everything I'm reading says I need to establish a trusted connection between the two servers.
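    For illustration, the two pieces usually involved in passing the end user's identity through to SQL Server on a separate box - ASP.NET impersonation plus Kerberos delegation - sketched with placeholder names (the account and host names are not from the question, and integrated pipeline mode places extra constraints on impersonation):
      <!-- web.config: run data access under the authenticated caller, not the app pool -->
      <system.web>
          <authentication mode="Windows" />
          <identity impersonate="true" />
      </system.web>
    and, on the domain side, registering an SPN so the web server's account can be trusted for delegation:
      setspn -S HTTP/webserver.mydomain.local MYDOMAIN\AppPoolAccount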

    Read the article

  • Save and restore multiple layers within a Photoshop action that flattens

    - by SuitCase
    I'm editing comic pages with layers - "background", "foreground", "lineart" and "over lineart". I have a Photoshop action that includes a Mode-Bitmap command, which requires the image to be flattened. I need this part of the action because I use the Halftone Screen method of reducing the greyscale image to bitmap on the "background" layer, creating a certain effect. I am pretty sure there is no filter or anything else that gives the same effect. After the mode is changed to bitmap, my action changes things back to greyscale for further changes. This poses a problem. I only want to do the bitmap mode change on the background layer, and after I do the change I want to restore the layer structure as it was - with the foreground, lineart and over lineart layers back above the now-halftoned background. My current method of saving these layers and restoring them is clumsy. My action is able to automatically save the "foreground" layer by selecting it, cutting it, then pasting it back in after the mode changing is over. But, for the "ink" and "over ink" layers, I have to manually cut these layers, paste them into a new document, and later re-cut and re-paste after running my action. This is so clunky! What I would like to know is if it's possible to set aside my layers in an automated way, and then bring them back in, also in an automated way. An ugly (but functional) solution would be to replicate my actions of creating new documents and pasting them temporarily there, but I don't think Photoshop allows you to do things outside of your current document with an action. It seems to me that the only way to do what I want is to use the "hack" of incorporating the clipboard into the action as a clever hack, but that leaves me stuck as I have two more layers that can't fit onto that same clipboard. Help or suggestions would be appreciated. I can keep on doing it manually, but to have a comprehensive action would save me a ton of time.

    Read the article

  • Western Digital My Book not recognized by WD software

    - by Kari
    A few years ago I bought a WD My Book Pro 2. It worked fine for a while, then one of the drives failed and I sent it back to be replaced under warranty. I never got around to setting up the new one when I got it back. I finally ran out of room on my internal drive, so I tried to use the external - no go. Both drives spin up, but aren't recognized by either Disk Utility (Mac) or the WD Drive Manager. I tried on a PC as well with fresh software. Then I pulled the drives out of the enclosure (warranty is already expired) and plugged them straight into the PC. Both recognized and working 100% in RAID0. BIOS recognizes either disk as functional; Windows only sees them when both are connected due to the RAID which I can't change without the WD software. The drives that were returned to me are the "Green" drives which I've read are NOT recommended for RAID. Is it possible that this is interfering with them reading externally? Any other ideas? My main computer is a laptop so using them internally isn't an option :(

    Read the article

  • About to go live: virtual dedicated server or cloud?

    - by morpheous
    I am about to launch my startup company, and we will be going live in a few weeks' time. We have really tight budgetary constraints, since we are bootstrapping - and would prefer not to raise external capital. I can't use shared hosting because I need more control of the server machine (for technical reasons - e.g. using proprietary extensions to PHP, Apache and the database layer as well) - but I want to control costs and don't want to go the fully private server route until we have determined the market size etc. So the only real alternatives AFAIK are a virtual server or the cloud. At the moment, cloud services seem a bit "vague" to me. My understanding is that they allow an entity to outsource its IT infrastructure, which in my mind (at least) is indistinguishable from what a hosting provider provides (at least from a functional point of view) - I would like to seek some clarification on exactly what the difference between the two is. Back to my original question, my requirements are:
      - IT infrastructure that can scale with growth
      - Ability to have control of the machine (e.g. to install our internally developed libraries etc.)
      - Backup software that is flexible and comprehensive enough (yet simple to use) to allow a (secured) backup strategy to be implemented. On this issue, I have always wondered where the actual backed-up data is stored (since the physical machines are remote, and one can't get access to any actual tapes etc. that it is backed up onto). I would also like some advice and recommendations in this area.
    Regarding data size, I am expecting the dataset to be increasing by a few megabytes of data (originally, say, 10 MB; in about a year's time, possibly 50 MB) every day. As an aside, I have decided to deploy on a Debian server (most of my additional libraries etc. were compiled and built on a Debian machine). Mindful of all of the above, I would like some advice (and reasons) as to which route to take. I would also like some advice on which backup software to use, from people who have walked a similar path.

    Read the article
