Search Results


  • Implementing a form of port knocking + PhoneFactor = two-factor auth for RDP?

    - by jshin47
    I have been looking into how to secure a publicly available RDP endpoint and want to use our two-factor authentication RADIUS server, PhoneFactor. I would like to implement the following process:

    1. The user opens a web app in a browser.
    2. In the web app, the user enters username + password, initiating RADIUS authentication.
    3. PhoneFactor calls the user to complete authentication.
    4. Once the user is authenticated, port 3389 is opened for the user's IP on the pfSense firewall.
    5. After some amount of time, the firewall rule for that IP is removed.

    I would like to know the following: Is this a typical setup? If it is a bad idea, please explain why. If it is possible, are there any packages that assist with this? Specifically, the step where the appropriate firewall rule would need to be added... Edit: I am aware of TS Web Gateway, but I want the users to be able to use the traditional RDP client...
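
    (A hedged sketch of how the firewall step might look on pfSense, which is pf-based: a rule references an address table, the web app adds the authenticated client's IP to that table, and a periodic job expires stale entries. The table name and addresses are invented, and this is untested.)

        # Ruleset contains a hypothetical pass rule referencing a table:
        #   pass in quick proto tcp from <rdp_allowed> to any port 3389
        # After RADIUS/PhoneFactor auth succeeds, the web app adds the client IP:
        pfctl -t rdp_allowed -T add 203.0.113.25
        # Periodically (e.g. from cron), drop entries older than an hour:
        pfctl -t rdp_allowed -T expire 3600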

  • High Availability Configuration using Heartbeat and Pacemaker

    - by pradeepchhetri
    I have the following setup: I have configured high availability between two load balancers (HAProxy) so that if HAProxy1 goes down, the floating IP gets transferred to the other load balancer, HAProxy2; hence all clients get their responses from HAProxy2, which at the back end is balancing across the same two webservers. This removes the single point of failure of having only one HAProxy. Whenever I stop heartbeat on HAProxy1, the floating IP moves to HAProxy2. But I want to configure it so that whenever the haproxy process itself goes down, the floating IP gets assigned to HAProxy2. Can someone tell me how to implement this?
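
    (A minimal Pacemaker sketch of that idea, assuming the crm shell and the distribution's LSB init script for haproxy; resource names and the IP are placeholders. Monitoring the haproxy resource and colocating the floating IP with it makes a process failure move the IP.)

        # Hypothetical crm configuration:
        primitive p_haproxy lsb:haproxy op monitor interval=10s
        primitive p_float_ip ocf:heartbeat:IPaddr2 params ip=192.0.2.100 op monitor interval=10s
        colocation ip_with_haproxy inf: p_float_ip p_haproxy
        order haproxy_before_ip inf: p_haproxy p_float_ip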

  • Send as another user / email address in Exchange 2010

    - by adamo
    In my setup there exist user1, ..., [email protected]. Mail for example.com is handled by Exchange 2010, and all the users use Outlook 2010. There is also a standard distribution list named [email protected]. Is it possible for some of the users to send email with [email protected] as the sender address? Can the sender's display name (GECOS) be different too when this happens, so that the recipient sees "Offices of Example.com" instead of "User Name X"? Sometimes the secretaries need to send mail as "the office" and not as themselves...
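
    (The usual route, sketched from memory with placeholder names, is to grant the user Send As rights on the distribution group from the Exchange Management Shell; Outlook's From field then accepts the group's address, and the recipient sees the group's display name:)

        # Grant Send As on the distribution group (identity and user are placeholders):
        Add-ADPermission -Identity "all" -User "EXAMPLE\secretary1" -ExtendedRights "Send As"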

  • Windows Malicious Software Removal Tool log says it can't do all required actions. Should I be concerned?

    - by Tom
    Here's what the log file c:/Windows/debug/mrt.log of my Windows 7 install says:

        WARNING: Security policy doesn't allow for all actions MSRT may require.
        ->Scan ERROR: resource process://pid:6080 (code 0x00000005 (5))
        ->Scan ERROR: resource process://pid:5300 (code 0x00000057 (87))
        ->Scan ERROR: resource process://pid:3512 (code 0x00000057 (87))

    I use the default setup; I didn't change anything. This is the first time I have checked the log file, and this warning has been in there from the start. Can I do something about it? Or should I not be concerned, because it can do everything necessary anyway? Do you have this warning in your log file?

  • Linux with Windows XP VMware guest unable to access certain Internet hosts

    - by unknown
    Hi, I have an annoying problem. My setup is the following: Debian Linux, 64-bit, VMware Workstation 7 as host, with Windows XP running as guest. From Firefox or Internet Explorer in the guest, I am unable to access a few sites, for example nvidia.com and osdir; I basically get connection timeouts, yet ping works to those sites. Moreover, Slashdot loads very, very slowly and sometimes renders as the horrible text-only version. Everything works fine on the Linux host. I suspect it has something to do with routing on Linux; I recall having a similar problem long ago, which was fixed by setting something in /proc. I tried setting the MTU and TCP window size lower on Windows, but it did not help. Any idea what is going on?
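
    (The "ping works but TCP stalls" pattern often points at broken path-MTU discovery. A hedged host-side sketch, assuming the guest's traffic crosses the Linux host's forwarding path; treat both lines as things to try, not a confirmed fix:)

        # Clamp TCP MSS to the discovered path MTU on the host:
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
        # Or enable kernel MTU probing, in the /proc spirit the poster remembers:
        echo 1 > /proc/sys/net/ipv4/tcp_mtu_probing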

  • kinit gives me a Kerberos ticket, but no AFS token

    - by Tomas Lycken
    I'm trying to set up access to my university's IT environment from my laptop running Ubuntu 12.04, by (mostly) following the IT department's guides on AFS and Kerberos. I can get AFS working well enough that I can navigate to my home folder (located in the nada.kth.se cell of AFS), and I can get Kerberos working well enough to forward tickets and authenticate me when I connect with ssh. However, I don't seem to get any AFS tokens locally on my machine, so I can't just go to /afs/nada.kth.se/.../folder/file.txt on my machine and edit it. I can't even stand in /afs/nada.kth.se/.../folder and run ls without getting Permission denied errors. Why doesn't kinit -f [email protected] give me an AFS token? What do I need to do to get one?
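
    (For what it's worth: a Kerberos TGT and an AFS token are separate things on OpenAFS, so kinit alone is not expected to produce a token. A sketch of the usual sequence; the principal is a placeholder since the real one is elided above:)

        kinit -f user@EXAMPLE.REALM      # obtain the Kerberos ticket (placeholder principal)
        aklog -c nada.kth.se             # convert it into an AFS token for the cell
        tokens                           # verify the token is present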

  • How do I install and use Windows Virtual PC in Windows 8?

    - by KronoS
    I really liked the integrated virtual machine support that Windows 7 had built in with Windows Virtual PC, and I'm looking to install it again. I'd like to be able to install multiple machines as I did before (XP, Ubuntu, etc.), but I can't seem to find Windows Virtual PC for Windows 8 anywhere. Is it still available? If not, was something set up in its place to replace it, and how do I use it? NOTE TO ALL: While this has an accepted self-posted answer, feel free to edit, comment, or add an answer of your own; if your answer is better than mine, I will accept it. This question was a Super User Question of the Week.

  • RAM caching causes severe performance drops

    - by B T
    I have read plenty of threads on memory caching and the standard responses of "a large cache is good, it shouldn't affect performance" and "the kernel knows best". I recently upgraded from 12.04 to 12.10 and changed from VirtualBox to VMware Workstation, and the performance differences are severe (I suspect because of the latter). When I am running my virtual machine, the system load monitor graph generally shows less than 50% memory usage, and the system load indicator shows that the rest of my RAM is used by the cache all the time. Plain and simple, this is the comparison:

    BEFORE
    - Cache was used very sparingly; pretty much none of my memory usage was cache.
    - Swappiness was 0 (this caused my memory to be used first, then swap only if needed).
    - Performance was good and logical: RAM was used fully first, and caching was minimal. I could run enough software to utilize my full 4 GB of RAM without any performance degradation whatsoever.
    - Swap space was then used as needed, which was obviously slower (I am on a HDD) but still usable once the current program was loaded into memory.

    AFTER
    - Cache fills the full 4 GB as soon as my virtual machine is run.
    - Swappiness is 0 (same behaviour as before, but the cache now takes the full memory straight away).
    - Performance is terrible and unusable while running Ubuntu software.
    - Basic things like changing windows take 2+ minutes.
    - Changing screens happens frame by frame, sometimes over up to 5 minutes.
    - I cannot run an IDE and a VM together like I could with ease before.

    So basically, any suggestions on how to get my performance back to how it was before, while keeping my current setup? My suspicion is that VMware is the problem, but how do I see what is tied to the use of the cache? Surely there is a way to control this behaviour in software as polished as VMware? Thanks.

    EDIT: It may also be important to note that the behaviour differs depending on whether VMware is open or closed. If VMware is open, the RAM will lock at about 50% used and 50% cache and go into the complete lock-up mentioned above. Contrastingly, if VMware is closed (after being open), the RAM will continue to rise as needed, the cache will stay as the complete remaining memory, and there is no noticeable performance degradation.
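
    (A possible, unverified lead for this specific VMware-on-Linux symptom: Workstation backs guest RAM with a host file by default, which can churn the host's page cache. The setting below is sometimes suggested to disable that; treat it as an assumption to test, not a confirmed fix.)

        # In the virtual machine's .vmx file:
        mainMem.useNamedFile = "FALSE"
        # While testing, the host cache can be inspected and dropped with:
        free -m
        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches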

  • Buying Dual Monitors of different size and resolution

    - by rutherford
    I'm about to choose a dual monitor setup:

    1. Is there any reason why I can't just walk out and buy the two TFT screens I like (a widescreen and a 'portrait' screen) and combine them? The widescreen would mainly be for gaming and the portrait for browsing, and I'd want the desktop stretching from one to the other (i.e. dragging the pointer and apps from one screen to the other).
    2. Do I need separate graphics cards for each monitor, or can one cover both? Is there any performance cost?
    3. Can I have separate background images for each, seeing as they'll be different resolutions?

  • Four Proven Advantages of Online Learning | Outside Cost, Accessibility or Flexibility

    - by Mohit Phogat
    Coursera believes that online courses complement and supplement traditional education (versus the common misconception that online will "replace" traditional). Our research shows that Coursera's platform, when used concurrently with a traditional classroom setup, is ideal for "blended learning" (i.e., students watch lectures pre-class, then class time focuses on interactive work and discussion). Additionally, we agree with Brad Zomick of SkilledUp, an online learning aggregator, who acknowledges that an online course "isn't an alternative at all but rather a different path with its own rewards." The advantages of Coursera and our apps for mobile were straightforward and conspicuous from the start: we're free, open, and flexible to learners' unique needs and styles. Over the past two years, however, the evidence shows there are many more tangible benefits to open, online learning. In SkilledUp's "The Advantages of Online Courses [Infographic]", crafted from the findings of leading educational research, four observations stand out from the overt characteristics:

    - Speedier Learning: "Research shows that online students achieve same or better learning results in about half the time as those in traditional courses."
    - More Active, Engaged & Motivated: learners thrive "when working with coursework that is challenging but within their capacity to master."
    - Tangible Skill Building, with an "improved attitude toward learning".
    - Better Teaching Quality: courses are taught by experts, with various multimedia and cutting-edge technology, and "are usually better organized than traditional courses."

    This is only the beginning, Courserians! Every day we hear your incredible stories about how open online courses enrich your lives and enhance your careers, while we study the steady stream of scientific, big-data research proving their worth on a large scale (such as UPenn's latest research on the welcome diversity in Coursera-hosted Wharton MBA courses). Our motto, "Learning without Limits", reminds us that open, online courses give a tremendous opportunity to those who might not otherwise have access (or time, or money) to study at a high-caliber institution. Source: Coursera

  • SBS 2003 R2 install errors

    - by piagetblix
    I just installed SBS 2003 R2 as a learning lab box on my home network. The install seemed to take quite a while, with a bit of shuffling between disks 4 and 5. The continue-setup checklist showed the red X beside the Exchange install, finally gave me an error about the "schema needing extending; run forest /adprep", and told me at the end that the install did not succeed. After a longer-than-normal reboot it comes up and seems to be OK. What logs do I need to check to see what went wrong? Should I reinstall and try burning the ISOs to new media? Cheers

  • Do you know the minimum builds to create on any branch?

    - by Martin Hinshelwood
    You should always have three builds in your team project. These should be set up and tested using an empty solution before you write any code at all.

    Figure: Three builds named in the format [TeamProject].[AreaPath]_[Branch].[Gate|CI|Nightly] for every branch. (A concrete example of this naming appears at the end of this entry.)

    These builds should use the same XAML build workflow; however, you may set them up to run different sets of tests depending on the time it takes to run a full build.

    - Gate: only needs to run the smallest set of tests, but should run most if not all of the unit tests. This is run before developers are allowed to check in.
    - CI: should run all unit tests and all of the automated UI tests. It is run after a developer check-in.
    - Nightly: should run all of the unit tests, all of the automated UI tests, and all of the load and performance tests. The nightly build is time-consuming and runs but once a night. Packaging of your product for testing the next day may be done at this stage as well.

    Figure: You can control what tests are run and what data is collected while they are running.

    Note: We do not run all the tests every time because of the time-consuming nature of running some tests, but ALL tests should be run overnight.

    Note: If you had a really large project with thousands of tests, including long-running load tests, you may need to add a Weekly build to the mix.

    Figure: Bad example; you can't tell what these builds do if they are in a larger list.

    Figure: Good example; you know exactly what project, branch and type of build these are for.

    Technorati Tags: SSW, SSW Rules, VS2010, VS ALM, Team Build 2010, Team Build
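
    (To make the naming format concrete, a hypothetical set of build names, with Northwind as the team project, Traders as the area path, and Main/Dev as branches:)

        Northwind.Traders_Main.Gate
        Northwind.Traders_Main.CI
        Northwind.Traders_Main.Nightly
        Northwind.Traders_Dev.CI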

  • BIOS password and hardware clock problems

    - by Slartibartfast
    I have an HP 6730b laptop. I bought it used and installed (Gentoo) Linux on it. The BIOS is protected with a password, and the guy I bought it from said, "I tweaked the BIOS from a Windows program; it never asked me for a password." I tried to erase the password by removing the battery, but it's still there. What did get erased, obviously, is the hardware clock. This is what happens:

    a) I can leave the laptop in January 1980 and it works.
    b) I can correct the system time, but boot will fail with "superblock mount time in future", after which I need to run fsck manually and continue the boot.
    c) I can correct the system time and sync it with hwclock -w, but then it behaves as in b) and resets the BIOS time to 1.1.1980 00:00.

    So I need either a way to bypass the BIOS password (which after a lot of googling seems impossible), a way to persist the clock, or a setup that allows the hardware clock in the eighties, the system clock at the present time, and a normal boot.
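
    (One hedged idea for the b/c scenarios, addressing only the fsck symptom rather than the BIOS: e2fsck can be told to tolerate a broken hardware clock so boot doesn't stop at the "mount time in future" check. The option name is from memory, so verify it against your e2fsprogs version.)

        # /etc/e2fsck.conf
        [options]
        broken_system_clock = true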

  • Mails bounce because of invalid character ('@') in username

    - by user1598585
    I have a working Exim setup with virtual users, working all right except when I try to send email to certain servers. These servers reject my emails with #5.1.3 Invalid character ('@') in username. The offending header parts seem to be:

        Return-path: <"[email protected]"@smtp.example.com>

    and

        ...(envelope-from <"[email protected]"@smtp.example.com>)...

    The problem is that I cannot find where and why the usernames are being generated like this. My router for submission is:

        dnslookup:
          driver = dnslookup
          domains = ! +local_domains
          transport = remote_smtp
          ignore_target_hosts = 0.0.0.0 : 127.0.0.0/8
          no_more

    And the respective transport:

        remote_smtp:
          driver = smtp

    What can be producing this problem?
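
    (Reading those headers, one assumption is that the full address is ending up as a quoted local part which Exim then qualifies with its own domain. A workaround sketch, an Exim rewrite rule that unwraps such addresses; the pattern, flags, and domain are guesses to adapt, not a confirmed fix for the underlying cause:)

        begin rewrite
        # Unwrap <"user@domain"@smtp.example.com> back to <user@domain>:
        \N^"([^"]+)"@smtp\.example\.com$\N    $1    Ffrs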

  • StarterSTS 1.5

    - by Your DisplayName here!
    I have had the 1.5 version of StarterSTS sitting here for quite some time now, but I was always reluctant to release it. Some of the reasons are:

    - too many new features for a single (small) version change.
    - too many features that are optional, like bridged authentication, which make the code very complex.
    - the way I implemented Azure integration adds a dependency on the Azure SDK, even for "on-premise" installations. I don't like that.
    - because I am using some WebForms bits and some WCF bits, the URL structure got messy. WebForms also doesn't help a lot with testability.

    All of the above reasons together, plus the fact that I am the only architect, developer, and tester on this project, made me come to the conclusion that I will cancel this release. But wait... StarterSTS 1.5 is fully functional. We use both the on-premise and Azure versions internally "in production". Cancelling means I will release the latest source code on CodePlex but will not mark it as a "recommended release". I also won't produce updated screencasts and docs, but the setup is very similar to earlier versions. Feel free to use and customize 1.5 and give me feedback. On the good-news front, I am working on a new version: welcome thinktecture IdentityServer. This version is based on MVC3 and the routing architecture, removes a lot of the clutter, has a SQL CE4-based configuration system, is more extensible, and is overall just cleaner. I will be able to upload CTPs very soon.

  • Allow JMX connection on JVM 1.6.x

    - by Martin Müller
    While trying to monitor a JVM on a remote system using VisualVM, activating JMX gave me some challenges. Dr Google and my employer's documentation quickly revealed some -D options needed for JMX, but strangely it only worked for a Solaris 10 system (my setup: MacOS laptop monitoring SPARC Solaris based JVMs). On S11 with the same options I saw "my" JVM listening on port 3000 (which I chose for JMX), but VisualVM was not able to get a connection. Finally I found out that at least my S11 installation needed an explicit setting of the RMI host name. This is what finally worked:

        -Dcom.sun.management.jmxremote=true \
        -Dcom.sun.management.jmxremote.ssl=false \
        -Dcom.sun.management.jmxremote.authenticate=false \
        -Dcom.sun.management.jmxremote.port=3000 \
        -Djava.rmi.server.hostname=s11name.us.oracle.com

    Maybe this post saves someone else the time I spent on research.
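
    (With those options in place, a remote VisualVM or JConsole would connect using the standard JMX service URL form, shown here with the host and port from above:)

        service:jmx:rmi:///jndi/rmi://s11name.us.oracle.com:3000/jmxrmi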

  • Apache not using the right SSL certificate

    - by user2420318
    In my Apache 2 setup, I have one VirtualHost for my main site and another for a static content site (downloads, CSS, etc.). I have SSL certificates for both, and the static content site is under a subdomain of the main site. I have configured four VirtualHosts altogether, as both sites need SSL ones as well. When I had only one SSL site, everything was OK, but now, with the second, the first site uses the second site's certificate, even though it is told specifically to use its own in its VirtualHost section. I honestly have no idea why Apache would do this. Any ideas? I have a feeling some default/global setting may be involved for some odd reason. I am using different IPs for the VirtualHosts.
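
    (For reference, a hedged sketch of what IP-based SSL VirtualHosts usually look like when each certificate is pinned to its own address; addresses, names, and paths here are invented. If the first vhost's cert "wins", it is worth checking that each <VirtualHost> really binds the intended IP:443 rather than *:443.)

        <VirtualHost 192.0.2.10:443>
            ServerName www.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/www.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/www.example.com.key
        </VirtualHost>

        <VirtualHost 192.0.2.11:443>
            ServerName static.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/static.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/static.example.com.key
        </VirtualHost>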

  • Can connect to WiFi router but not the internet (12.10)

    - by harsh
    I believe I have tried every trick in the book! I am able to access my router from my Ubuntu desktop but not a single site on the internet. I installed the drivers for my PCI WiFi card following this procedure: http://steveswinsburg.wordpress.com/2011/03/12/how-to-install-a-d-link-dwa-525-wireless-network-card-in-ubuntu-10-04/. Earlier it didn't connect to the internet and kept prompting me for the password; after I installed the driver, it got connected. But now I can't access the internet and can only access my router. I deleted the resolv.conf file. I have also tried the "rfkill" command, and it shows the card is unlocked. I have even tried pinging the IPs 8.8.8.8 and 8.8.4.4, and I get replies from both; I have also added them to my WiFi configuration setup on the Ubuntu desktop. I am new to Ubuntu but learning fast :-) These are the things I don't like about Ubuntu: the simple tasks are difficult for a person coming from Windows. I am using Ubuntu 12.10. Please reply at the earliest; I would be very thankful.
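
    (Since pinging 8.8.8.8 works, routing looks fine and this smells like DNS, especially with resolv.conf deleted. A quick diagnostic sketch, assuming no resolvconf/NetworkManager override rewrites the file:)

        nslookup google.com                                     # if this fails, name resolution is the problem
        echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf   # temporary manual fix to test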

  • Is there a special way to set up Garry's Mod for Mac?

    - by credford
    I just downloaded Garry's Mod from Steam, and every time I try to play Single Player on the first map, it crashes at the Loading Resources section of the progress bar. Here is some of the exception information the system prints out:

        Exception Type:  EXC_BAD_ACCESS (SIGBUS)
        Exception Codes: KERN_PROTECTION_FAILURE at 0x000000001acec4d0
        Crashed Thread:  4

    When I ran it on the Windows side (Boot Camp), Windows 7 required me to give Garry's Mod some permissions. I wasn't asked to do this on the Mac side; maybe that's it? Is there some kind of special setup for the Mac?

  • Which hardware changes require operating system reinstallation?

    - by Mark
    I'm about to upgrade my computer but might keep some parts, so I'm just wondering what I would have to keep to avoid reinstalling my OSs; at the moment I have a dual-boot setup with Ubuntu and Windows 7. I'm pretty sure you can't just take your hard drive with the OS on it, put it into a different box, and keep going (can you?), but I know you can change the graphics card, secondary hard drives, and RAM without a problem. So what is it that you can't change? The CPU? The motherboard? Thanks for any replies.

  • Oracle E-Business Suite Plug-in 4.0 Released for OEM 11g (11.1.0.1)

    - by Steven Chan
    [Feb. 25, 12:40 PM Update: Removed incorrect references to RHEL 3, SLES 9, HP-UX 11.11, Solaris 8]

    We're very pleased to announce the release of Oracle E-Business Suite Plug-in 4.0, an integral part of Application Management Suite for Oracle E-Business Suite. The management suite combines features of the standalone Application Management Pack (AMP) for Oracle E-Business Suite and Application Change Management Pack (ACMP) for Oracle E-Business Suite with Oracle's real user monitoring and configuration management capabilities. The features that were available in the standalone packs are now packaged into the Oracle E-Business Suite Plug-in 4.0, which is fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. The Oracle E-Business Suite Plug-in is released via patch 8333939. For the AMP and ACMP 4.0 installation guide, see: Getting Started with Oracle E-Business Suite Plug-in Release 4.0 (Note 1224313.1)

    General AMP & ACMP improvements

    Oracle Enterprise Manager 11g Grid Control Support: Application Management Pack 4.0 and Application Change Management Pack 4.0 for Oracle E-Business Suite are certified with Oracle Enterprise Manager 11g Grid Control Release 1 (11.1.0.1.0).

    Built-in Diagnostic Ability: Release 4.0 has numerous major enhancements that provide the necessary intelligence to determine whether the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that validate whether the appropriate patches, privileges, setups, and profile options have been configured. This feature improves the setup and configuration process.

  • Supervisor HTTP Server Port Issue.

    - by Catalina
    I have Supervisor set up to manage a few processes. It works perfectly fine when I boot my server; however, when I stop it and try to start it again, it fails and gives me this error message:

        * Starting Supervisor daemon manager...
        Error: Another program is already listening on a port that one of our HTTP servers is configured to use.  Shut this program down first before starting supervisord.
        For help, use /usr/bin/supervisord -h
        ...fail!

    I'm running nginx on port 80 and four web servers on ports 8000, 8001, 8002, and 8003. Does anyone have any idea of what is going on? When I reboot, everything works fine.
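
    (A hedged guess at the usual cause: stopping the init script leaves the old supervisord, or its UNIX socket, behind, so the restart can't bind. A diagnostic/cleanup sketch; the socket path is a common Debian/Ubuntu default, not confirmed for this box:)

        ps aux | grep supervisord          # is an old instance still running?
        sudo supervisorctl shutdown        # ask it to exit cleanly (or kill <pid>)
        sudo rm -f /var/run/supervisor.sock
        sudo service supervisor start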

  • Monitoring remote laptops

    - by kaerast
    We're looking for something to monitor around 30 remote laptops that are constantly out on the road, never returning to base except when there are serious hardware faults that need repairing. These laptops won't always be connected to the internet; they'll have mobile broadband and may work offline most of the time. They will be running a mixture of Windows XP, Vista, and 7, and there is currently no server setup. We're primarily interested in making sure that Windows updates and antivirus updates are happening, and I guess we should also be monitoring remaining disk space, what software is installed, and ideally hardware health. It might also be nice if we could gain remote access to perform work on them. My main reason for wanting to monitor them is that it's going to be a real pain to get them back to base if anything goes wrong, so I want to be proactive in ensuring they last as long as possible. Can you recommend what I should be monitoring to ensure a long life? What tools would you use to monitor and maintain these computers?

  • Allow VMWare client to connect only via VPN

    - by Frank Meulenaar
    I have a VMware client (currently using Workstation on Vista, but thinking about switching to ESX) running Windows XP. I've installed OpenVPN in the client, and it connects to the corporate VPN server. I want to make sure that all traffic from the Windows XP machine goes through this VPN tunnel, but I can't change any settings on the corporate VPN server. Is it possible to restrict the internet connectivity of the Windows XP client in such a way that it can only send packets to the IP of the corporate VPN server? That way it would be impossible for packets to bypass the tunnel. I've looked at NAT configurations but couldn't see how I could make this setup work.
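
    (A sketch of one way to approximate this inside the XP guest itself, with invented addresses: drop the default route and allow only the VPN server through the NAT gateway, letting OpenVPN then install its own routes through the tunnel. Untested, and firewalling the VM's NAT on the host side would likely be more robust.)

        :: Hypothetical addresses: 203.0.113.5 = VPN server, 192.168.137.2 = NAT gateway
        route delete 0.0.0.0
        route add 203.0.113.5 mask 255.255.255.255 192.168.137.2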

  • Nvidia API mismatch

    - by Oli
    I had planned a day of relaxing with Portal 2, but on starting Steam (for the first time in a couple of weeks) I was greeted with the following message in the terminal:

        Error: API mismatch: the NVIDIA kernel module has version 270.41.19, but this
        NVIDIA driver component has version 270.41.06. Please make sure that the kernel
        module and all NVIDIA driver components have the same version.

    I'll confess I don't really know what it's talking about when it says "driver". The version of nvidia-current is 270.41.19; I thought that was the driver and module, all in one. I use the X-SWAT PPA, and I have noted that the nvidia-settings package has been boosted to 275.09.07. As this is just a settings application, I don't think this mismatch has anything to do with it; it's also not the same version as in the error message. I'd rather not purge back to the standard Nvidia driver, as it's less than stable on my GTX 580. I would accept an answer that takes the manual setup and makes it recompile when the kernel does (i.e., some DKMS wizardry), but it has to work; I don't want to drop back to text mode every time I restart after a kernel upgrade.

    Edit: Minecraft works without a single complaint about driver versions. Penumbra dies with roughly the same error when entering a game.
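
    (A hedged diagnostic sketch: the message means the module loaded in the kernel and the user-space libraries disagree, so reloading the module, or rebuilding it via DKMS, is the usual first move. The DKMS module name and version below are taken from the post and may differ on a given install.)

        lsmod | grep nvidia                          # confirm which module is loaded
        sudo rmmod nvidia && sudo modprobe nvidia    # reload (from a console, with X stopped)
        dkms status | grep -i nvidia                 # see what DKMS has registered
        sudo dkms install nvidia-current/270.41.19   # rebuild for the running kernel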
