Search Results

Search found 21331 results on 854 pages for 'require once'.


  • Add a Real-Time Earth Wallpaper App to Ubuntu with xplanetFX

    - by Asian Angel
    Are you tired of the same old wallpaper on your Ubuntu desktop? Now you can go from blah to literally spacious, real-time styled views of Earth with the xplanetFX Wallpaper App for Linux. You can conveniently access the “file type” downloads, screenshots, and jump-to links all on the front page. For our example we downloaded the .deb setup file on our system. The setup file will need to download three additional files to complete the setup process. After those are downloaded all dependencies will have been met and you can complete the installation process. Once that is done you can find xplanetFX by going to the Accessories Section of your Ubuntu Menu. This is what the main control window looks like when you start xplanetFX for the first time. You should take a few moments to look through the various tabs and tweak the settings for items like location, screen resolution, timing, auto-start, etc. When you are done click on Execute and within a few moments your desktop will have a fresh new look! Note: It took ~30 seconds for the display to activate on our system. Have fun with xplanetFX! xplanetFX Homepage [via OMG! Ubuntu!]

    Read the article

  • Windows 7 / Media Center lock problem

    - by ICTdesk.net
    Hi, we have a Windows 7 machine with Media Center that is connected to a large screen; it displays a corporate slideshow near the coffee machine. When the computer is started it automatically logs in and starts the slideshow. The problem is that at irregular intervals (once a day, some days not at all) the slideshow stops and the machine switches to the login screen, showing which user is logged on. We don't know why this is happening. We first thought someone on the network was trying to open an RDC connection to the machine, because the symptom is the same as when you take over a machine via RDC with a different user, but nobody is doing that. The screensaver is disabled and the power settings are set to "always on". Anybody have ideas why this is happening?

    Read the article

  • How Facebook's Ad Bid System Works

    - by pnongrata
    When you are creating an ad on Facebook, you are provided with a "suggested bid" range (e.g., $0.90 - $2.15 USD). According to this page: The suggested bid range is there to help you pick a maximum bid so your ad will be successful. It’s based on how many other advertisers are competing to show their ad to the same audience as you are. I'm interested in understanding what's actually going on (technically) under the hood here. Say a user logs into Facebook. On the server side, the HTTP request that the user's browser sent (as part of the login) is handled, and the server needs to figure out which ad to display back to the user. I assume this is where the "bidding" system comes into play? Say that, based on this user's demographics, and based on the audience targeting that several competing advertisers designed their campaigns with, let's pretend that Facebook sees a pool of 20 different ads it could return. How does this bidding system help Facebook determine which of the 20 ads it returns to the client side? I'm guessing that advertisers who "bid more" get prioritized over those who "bid less". But when does this bidding take place? How often does an advertiser need to re-bid? How long is a bid binding for? Once I understand these usage-related concepts behind ads, it will probably be obvious which of the following "selection strategies" the backend is using: round robin, prioritized round robin, randomized (doubtful), history-based, or MVP-based. Thanks to anyone who can help point me in the right direction and explain what these suggested bid systems are and how they work.
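
    To make the "prioritized" idea concrete, here is a toy model in Python of how a bid-weighted selection could rank a candidate pool. This is illustration only; the eCPM-style scoring (bid times predicted click-through rate) and the tie-breaking are my assumptions, not Facebook's actual algorithm.

        import random
        from dataclasses import dataclass

        @dataclass
        class Ad:
            advertiser: str
            max_bid: float        # advertiser's maximum bid in USD
            predicted_ctr: float  # estimated click-through rate for this user

        def pick_ad(candidates: list[Ad]) -> Ad:
            # Score each ad by expected value (bid * predicted CTR), so a high bid
            # with a low CTR can lose to a lower bid that users click more often.
            best_score = max(ad.max_bid * ad.predicted_ctr for ad in candidates)
            # Break ties among equally scored ads at random.
            best = [ad for ad in candidates if ad.max_bid * ad.predicted_ctr == best_score]
            return random.choice(best)

        pool = [Ad("A", 2.15, 0.010), Ad("B", 0.90, 0.030), Ad("C", 1.50, 0.015)]
        print(pick_ad(pool).advertiser)  # "B": 0.90 * 0.030 = 0.027 beats the others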

    Read the article

  • Shared Folders in VirtualBox on Windows 7

    In my adventures with VirtualBox, my latest victory was in figuring out how to share folders between my host OS (Windows 7) and my virtual OS (Windows Server 2008). I'm familiar with VirtualPC and other such products, which allow you to share local folders with the VM. When you do, they just show up in Windows Explorer and all is good. However, after configuring shared folders in VirtualBox like so: [screenshot] I couldn't see them anywhere within the machine. Where are Shared Folders in a VirtualBox VM? Fortunately a bit of searching yielded this article, which describes the problem nicely. It turns out that there is a magic word you have to know, and that is the share name for the host OS: \\vboxsrv Once you know this, mapping shared folders is straightforward. From Windows Explorer, click on the Map network drive option, and then map a drive to \\vboxsrv\YOURSHAREDFOLDER, like so: [screenshot] With that, it's easy to share folders between the guest and host OS using VirtualBox. The reason I didn't simply use a standard network share to my host OS machine name is that both guest and host are in a VPN, and the VPN is over the Internet and in a different country, so when I went that route my files were (apparently) traveling from host to guest by way of the remote VPN network, rather than locally. Using the Shared Folders feature dramatically sped up my ability to transfer files between host and guest machines.

    Read the article

  • Bizarre image loading problem from apache2

    - by NateDSaint
    Users have complained a few times about seeing a bizarre set of pink or green stripes on our webpage. At first I thought there was a rash of video card outages, but then someone sent me a screenshot from their browser (IE8). I later saw the same thing, but with slightly different colors, in Chrome. Users have experienced this on their iPads and iPhones (iOS Safari) too. Because I've optimized the site to cache images, the bad image stays around until you clear your cache; once you do, it resolves itself. My assumption is that the transmission of the image is being cut off mid-stream and then staying that way, but I can't for the life of me figure out why. Here's what I've checked. The header length is being sent, and transmission looks okay (wget sample below):

        wget http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
        --2012-04-05 08:46:00-- http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
        Resolving www.superiorlivestock.com (www.superiorlivestock.com)... [ip redacted]
        Connecting to www.superiorlivestock.com (www.superiorlivestock.com)|[ip redacted]|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 45926 (45K) [image/jpeg]
        Saving to: `wallbg2.jpg'

    Images are not being served gzipped (Apache conf below):

        SetOutputFilter DEFLATE
        SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary

    The site is www.superiorlivestock.com, and here's a sample of the bad page load: [screenshot] Is there something obvious I'm missing? Am I saving my images in the wrong format somehow?

    Read the article

  • Unit and Integration testing: How can it become a reflex

    - by LordOfThePigs
    All the programmers in my team are familiar with unit testing and integration testing. We have all worked with it; we have all written tests with it; some of us have even felt an improved sense of trust in our own code. However, for some reason, writing unit/integration tests has not become a reflex for any of the members of the team. None of us actually feels bad when not writing unit tests at the same time as the actual code. As a result, our codebase is mostly uncovered by unit tests, and projects enter production untested. The problem with that, of course, is that once your projects are in production and are already working well, it is virtually impossible to obtain time and/or budget to add unit/integration testing. The members of my team and I are already familiar with the value of unit testing (1, 2), but it doesn't seem to help bring unit testing into our natural workflow. In my experience, making unit tests and/or a target coverage mandatory just results in poor-quality tests and slows down team members, simply because there is no self-generated motivation to produce these tests. Also, as soon as pressure eases, unit tests are not written any more. My question is the following: are there any methods that you have experimented with that help build momentum inside the team, leading to people naturally wanting to create and maintain those tests?

    Read the article

  • Problem opening XWindows programs with xming and SSH Secure Shell

    - by Brian
    I've installed SSH Secure Shell and Xming on my laptop running Windows 7 (64-bit). I'm having trouble starting X Windows applications from the SSH console, though I've been able to do it in the past. I've pretty much determined that it's not a server issue, because I've tried it on two different servers (both running RHEL 5). Running "echo $DISPLAY" on either server gives me "localhost:10.0". My XLaunch configuration settings are: Multiple Windows, 10 (display number), and Start no client. Once Xming has launched, I'll try to execute something like "firefox" and I get this back: The application 'firefox' lost its connection to the display localhost:10.0; most likely the X server was shut down or you killed/destroyed the application. I've already checked to make sure that the X server is running, and it is:

        root 12579 2689 0 Feb14 tty7 00:04:23 /usr/bin/Xorg :0 -br -audit 0 -auth /var/gdm/:0.Xauth -nolisten tcp vt7

    Additionally, X11 tunneling has been enabled in SSH, as have SSH 2 connections.

    Read the article

  • Using Sandcastle to build code contracts documentation

    - by DigiMortal
    In my last posting about code contracts I showed how code contracts are documented in XML documents. In this posting I will show you how to get code contracts documented with Sandcastle and Sandcastle Help File Builder. Before we start, let’s download the Sandcastle tools we need: Sandcastle and Sandcastle Help File Builder. Install Sandcastle first and then Sandcastle Help File Builder. Because we are generating only HTML-based documentation that we upload to a server, we don’t need any other tools. Of course, we need Cassini or IIS, but I expect that to already be on your machine. Open your project and turn on XML documentation for the project and for contracts. Now let’s run Sandcastle Help File Builder. We have to create a new project and add our Visual Studio solution to this project. Now set the HelpFileFormat parameter value to Website and let the builder build the help. You have to wait about two or three minutes until the help is ready. Take a look at the documentation that Sandcastle generated – you will not see much information there about code contracts and their rules.

    Enabling code contracts documentation

    Now let’s include code contracts in the documentation. Follow these steps:

    1. Open the Sandcastle folder and make a copy of the vs2005 folder.
    2. Open the CodeContracts folder (c:\program files\microsoft\contracts\) and unzip the archive from its sandcastle folder.
    3. Copy all unzipped files to the Sandcastle folder.
    4. Create (yes, create new) and build your Sandcastle Help File Builder documentation project again.
    5. Open the help.

    In my case I see something like this now. [screenshot] As you can see, the contracts are documented pretty well. We can easily turn on code contracts XML documentation generation and all our contracts are documented automatically. To make the documentation work we had to use the Sandcastle help file fixes that are installed with code contracts, and if we had an existing Sandcastle Help File Builder project we had to recreate it from scratch for the new rules to be picked up. Once the documentation support for contracts works, we have to do nothing more to get contracts documented.

    Read the article

  • postgres memory allocation tuning 2

    - by pstanton
    I've got an Ubuntu Linux system with 12 GB of memory, most of which (at least 10 GB) can be allocated solely to Postgres. The system also has a six-disk 15k SCSI RAID 10 setup. The process I'm trying to optimise is twofold. Firstly, a single-threaded, single-connection process will do many inserts into 2-4 tables linked by foreign keys. Secondly, many different complex queries are run against the resulting data, using GROUP BY extensively; this part especially needs to be optimised. I have four of these processes running at once in order to make use of the quad-core CPU, so there will generally be no more than 5 concurrent connections (1 spare for admin tasks). What configuration changes to the default Postgres config would you recommend? I'm looking for the optimum values for things like work_mem, shared_buffers, etc. Relevant doco. Thanks!
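
    Not an authoritative answer, but as a sketch of where tuning often starts for this shape of workload (12 GB RAM, ~5 connections, GROUP BY-heavy queries), the following postgresql.conf values are plausible starting points. Every number here is an assumption to validate against the actual Postgres version and measured behaviour:

        shared_buffers = 3GB            # roughly 25% of RAM is a common starting point
        work_mem = 256MB                # per sort/hash operation; workable with only ~5 connections
        maintenance_work_mem = 1GB      # speeds up index builds and VACUUM
        effective_cache_size = 8GB      # planner hint about OS cache size; allocates nothing
        checkpoint_segments = 32        # spreads checkpoint I/O during heavy inserts (pre-9.5 setting)
        wal_buffers = 16MB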

    Read the article

  • Should these concerns be separated into separate objects?

    - by Lewis Bassett
    I have objects which implement the interface BroadcastInterface, which represents a message that is to be broadcast to all users of a particular group. It has a setter and getter method for the Subject and Body properties, and an addRecipientRole() method, which takes a given role, finds the contact token (e.g., an email address) for each user in the role, and stores it. It then has a getContactTokens() method. BroadcastInterface objects are passed to an object that implements BroadcasterInterface. These objects are responsible for broadcasting a passed BroadcastInterface object. For example, an EmailBroadcaster implementation of the BroadcasterInterface will take EmailBroadcast objects and use the mailer services to email them out. Now, depending on what BroadcasterInterface implementation is used to broadcast, a different implementation of BroadcastInterface is used by client code. The Single Responsibility Principle seems to suggest that I should have a separate BroadcastFactory object for creating BroadcastInterface objects, depending on what BroadcasterInterface implementation is used, as creating the BroadcastInterface object is a different responsibility to broadcasting them. But the class used for creating BroadcastInterface objects depends on what implementation of BroadcasterInterface is used to broadcast them. I think, because the knowledge of what method is used to send the broadcasts should only be configured once, the BroadcasterInterface object should be responsible for providing new BroadcastInterface objects. Does the responsibility of "creating and broadcasting objects that implement the BroadcastInterface interface" violate the Single Responsibility Principle? (Because the contact token for sending the broadcast out to the users will differ depending on the way it is broadcast, I need different broadcast classes, though client code will not be able to tell the difference.)
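
    To make the arrangement concrete, here is a minimal sketch in Python (for illustration; the names come from the question, everything else is assumed) of the idea above: the broadcaster doubles as the factory for the one Broadcast type it knows how to send.

        from abc import ABC, abstractmethod

        class Broadcast(ABC):
            """A message to broadcast: subject, body, and recipient contact tokens."""
            def __init__(self):
                self.subject = ""
                self.body = ""
                self.tokens = []

            @abstractmethod
            def add_recipient_role(self, role): ...

        class EmailBroadcast(Broadcast):
            def add_recipient_role(self, role):
                # For email, the contact token is each user's email address.
                self.tokens += [user.email for user in role.users]

        class Broadcaster(ABC):
            @abstractmethod
            def create_broadcast(self) -> Broadcast: ...

            @abstractmethod
            def send(self, broadcast: Broadcast): ...

        class EmailBroadcaster(Broadcaster):
            def create_broadcast(self) -> Broadcast:
                # The broadcaster hands out the matching Broadcast implementation, so
                # only one object needs to know which sending method is configured.
                return EmailBroadcast()

            def send(self, broadcast: Broadcast):
                for token in broadcast.tokens:
                    print(f"mailing {token}: {broadcast.subject}")

        broadcaster = EmailBroadcaster()       # swap in an SmsBroadcaster and nothing below changes
        msg = broadcaster.create_broadcast()
        msg.subject, msg.body = "Hello", "Broadcast body"
        msg.tokens.append("user@example.com")  # normally filled via add_recipient_role()
        broadcaster.send(msg)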

    Read the article

  • Create and Track Your Own License Keys with PowerShell

    - by BuckWoody
    SQL Server used to have a cool little tool that would let you track your licenses. Microsoft didn’t use it to limit your system or anything, it was just a place on the server where you could record that this system used this license key. I miss those days – we don’t track that any more, and I want to make sure I’m up to date on my licensing, so I made my own. Now, there are a LOT of ways you could do this. You could add an extended property in SQL Server, add a table to a tracking database, use a text file, track it somewhere else, whatever. This is just the route I chose; if you want to use some other method, feel free. Just sharing here. Warning: Serious problems might occur if you modify the registry incorrectly by using Registry Editor or by using another method. These problems might require that you reinstall the operating system. Microsoft cannot guarantee that these problems can be solved. Modify the registry at your own risk. And this is REALLY important. I include a disclaimer at the end of my scripts, but in this case you’re modifying your registry, and that could be EXTREMELY dangerous – only do this on a test server – and I’m just showing you how I did mine. It isn’t an endorsement or anything like that, and this is a “Buck Woody” thing, NOT a Microsoft thing. See this link first, and then you can read on. OK, here’s my script:

        # Track your own licenses
        # Write a new key to be the license location
        mkdir HKCU:\SOFTWARE\Buck

        # Write the variables - one sets the type, the other sets the number, and the last one holds the key
        New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseType" -value "Processor"
        # Notice the DWord value here - this one is a number so it needs that. Keep this on one line!
        New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseNumber" -propertytype DWord -value 4
        New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseKey" -value "ABCD1234"

        # Read them all
        $LicenseKey = Get-Item HKCU:\Software\Buck
        $Licenses = Get-ItemProperty $LicenseKey.PSPath
        foreach ($License in $LicenseKey.Property) { $License + "=" + $Licenses.$License }

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • Trying to install flash player on ubuntu 12.04

    - by Eric
    I am having trouble installing this program. I do not know how to locate the browser plugins directory, or how to change directories in the terminal. Installation instructions:

    Installing using the plugin tar.gz:
    1. Save the plugin tar.gz locally and note the location the file was saved to.
    2. Launch a terminal and change directories to the location the file was saved to.
    3. Unpack the tar.gz file. Once unpacked you will see the following: libflashplayer.so and /usr
    4. Identify the location of the browser plugins directory, based on your Linux distribution and Firefox version.
    5. Copy libflashplayer.so to the appropriate browser plugins directory. At the prompt type: cp libflashplayer.so <BrowserPluginsLocation>
    6. Copy the Flash Player Local Settings configuration files to the /usr directory. At the prompt type: sudo cp -r usr/* /usr

    Installing the plugin using RPM: as root, enter in terminal: rpm -Uvh <rpm_package_file>, then press the Enter key and follow the prompts.

    Installing the standalone player: unpack the tar.gz file, then to execute the standalone player double-click it, or enter in terminal: ./flashplayer
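
    For what it's worth, on Ubuntu 12.04 with the distribution's Firefox the whole sequence might look like the following; the download location and the /usr/lib/mozilla/plugins directory are assumptions (check where your tar.gz was actually saved and where your browser loads plugins from):

        cd ~/Downloads                                        # wherever the tar.gz was saved
        tar -xzf install_flash_player_11_linux.x86_64.tar.gz  # exact filename will vary
        sudo cp libflashplayer.so /usr/lib/mozilla/plugins/   # the browser plugins directory
        sudo cp -r usr/* /usr                                 # Flash Player local settings files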

    Read the article

  • How can I configure a NameCheap domain to point to an Apache subfolder? [closed]

    - by Serg
    Possible Duplicate: How to make domain point to another web directory? My boss just bought the domain sergiotapia.me for me, and agreed to host my WordPress blog on company servers. We're using Apache (latest version). The domain was purchased on NameCheap.com and the DNS settings are as follows: [screenshot] When I visit my URL, it gets redirected to my VPS server without problems. The thing is, I want my blog to appear at once, not have a user select a folder and then see the blog. My WordPress blog is located at /var/www/sergiotapia.me. On IIS, you would edit the Bindings and map a domain to an application. I'm guessing I have to do something similar on Apache. What am I looking for here? Any tips on getting this working correctly?
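
    The Apache counterpart of an IIS binding is a name-based virtual host. A minimal sketch, assuming the stock Apache layout on the VPS and that the blog really lives in /var/www/sergiotapia.me:

        <VirtualHost *:80>
            ServerName sergiotapia.me
            ServerAlias www.sergiotapia.me
            DocumentRoot /var/www/sergiotapia.me
            <Directory /var/www/sergiotapia.me>
                AllowOverride All   # lets WordPress manage permalinks via .htaccess
            </Directory>
        </VirtualHost>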

    Read the article

  • Enterprise Integration: Can Companies Afford It?

    - by Ralph Wheaton
    Each year, my company holds a global sales conference where employees and partners from around the world come together to collaborate, share knowledge and ideas, and learn about future plans. As a member of the professional services division, several of us had been asked to make a presentation, an elevator pitch in 3 minutes or less that relates to a success we have worked on or directly relates to our tag (that is, our primary technology focus). Mine happens to be Enterprise Integration as it relates to Business Intelligence. I found it rather difficult to present that pitch in a short amount of time and had to pare it down. At any rate, in just a little over 3 minutes, this is the presentation I submitted. Here is a link to the full presentation video in WMV format. Many companies today subscribe to a buy-versus-build mentality in an attempt to drive down costs and improve time to implementation. Sometimes this makes sense, especially as it relates to specialized software or software that performs a small number of tasks extremely well. However, if not carefully considered or planned out, this oftentimes leads to multiple disparate systems with silos of data or multiple versions of the same data. For instance, client data (contact information, addresses, phone numbers, opportunities, sales) stored in your CRM system may not play well with Accounts Receivables. Employee data may be stored across multiple systems such as HR, Time Entry and Payroll. Other data (such as member data) may not originate internally, but be provided by multiple outside sources in multiple formats. And to top it all off, some data may have to be manually entered into multiple systems to keep it all synchronized. When left to grow out of control like this, overall performance is lacking, stability is questionable and maintenance is frequent and costly. Worse yet, in many cases, this topology, this hodgepodge of data, creates a reporting nightmare. Decision makers are forced to try to put together pieces of the puzzle, attempting to find the information they need, wading through multiple systems to find what they think is the single version of the truth. More often than not, they find they are missing pieces, pieces that may be crucial to growing the business rather than closing the business. An enterprise integration solution addresses this by standardizing how data is defined and moved across applications. Master data owners are defined to establish single sources of data (such as the CRM system owning client data). Other systems subscribe to the master data, and changes are replicated to subscribers as they are made. This can be one-way (no changes are allowed on the subscriber systems) or bi-directional. But at all times, the master data owner is current or up to date. And all data, whether internal or external, uses the same processes and methods to move from one place to another, leveraging the same validations, lookups and transformations enterprise-wide, eliminating inconsistencies and siloed data. Once implemented, an enterprise integration solution improves performance and stability by reducing the number of moving parts and eliminating inconsistent data. Overall maintenance costs are mitigated by reducing touch points, or the number of places that require modification when a business rule is changed or another data element is added. Most importantly, however, decision makers can now easily extract and piece together the information they need to grow their business, improve customer satisfaction and so on.
So, in implementing an enterprise integration solution, companies can position themselves for the future, allowing for easy transition to data marts, data warehousing and, ultimately, business intelligence. Along this path, companies can achieve growth in size, intelligence and complexity. Truly, the question is not can companies afford to implement an enterprise integration solution, but can they afford not to.   Ralph Wheaton Microsoft Certified Technology Specialist Microsoft Certified Professional Developer Microsoft VTS-P BizTalk, .Net

    Read the article

  • Utility or technique for swapping files quickly in Windows

    - by foraidt
    I frequently need to swap one file with another, without overwriting either. Let's say there are two files, foo_new.dll and foo.dll. I usually rename them the following way: foo.dll -> foo_old.dll, foo_new.dll -> foo.dll, [do something with the replaced file], foo.dll -> foo_new.dll, foo_old.dll -> foo.dll. This is OK for a single file swap, but it becomes tedious when swapping multiple files at once. Is there a Windows (7 and preferably XP) utility or a technique that simplifies this task and works well when swapping multiple files? I'd prefer to be able to use it from within FreeCommander, but Windows Explorer would be OK, too.
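
    There may well be a ready-made utility for this, but the rename dance is small enough to script. A sketch in Python (the foo.dll/foo_new.dll pairing convention comes from the question; the folder path and temporary suffix are made up):

        import os

        def swap(path_a: str, path_b: str) -> None:
            """Exchange two files by way of a temporary name, overwriting neither."""
            tmp = path_a + ".swaptmp"
            os.rename(path_a, tmp)      # foo.dll         -> foo.dll.swaptmp
            os.rename(path_b, path_a)   # foo_new.dll     -> foo.dll
            os.rename(tmp, path_b)      # foo.dll.swaptmp -> foo_new.dll

        # Swap every *_new.* file in a folder with its counterpart in one go.
        folder = r"C:\build"
        for name in os.listdir(folder):
            base, ext = os.path.splitext(name)
            if base.endswith("_new"):
                original = os.path.join(folder, base[:-4] + ext)
                if os.path.exists(original):
                    swap(original, os.path.join(folder, name))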

    Read the article

  • SSMS Tools Pack 2.0 is out! With huge productivity booster features that will blow your mind and ease your job even more.

    - by Mladen Prajdic
    What better way to end the summer and start those productive autumn days ahead than with a fresh new version of the SSMS Tools Pack? This is a big release with two new features that are huge productivity boosters. The first new feature is Tab Sessions. Every SQL tab you open is saved every N (default 2) minutes and stored in a session. This works similarly to internet browser sessions. Once you reopen SSMS you can restore your last session with a click of a button; you even get every window connected to the server it was previously connected to. The Tab History window looks like this: [screenshot] The second feature is the Execution Plan Analyzer. It is designed to quickly help you find the costliest operators by a number of properties. If that's not enough, you can easily search through the whole execution plan for whatever you like. And to top it off, you can auto-analyze the execution plan; the analysis reports various problems the execution plan has and suggests the most common solution. The ultimate purpose of the Execution Plan Analyzer is to make your troubleshooting quicker and easier. It uses a simple user interface that is easy to navigate and is built directly into the execution plan itself. The Execution Plan Analyzer looks like this: [screenshot] Smaller fixes include a completely redesigned SQL History Search window and various other bug fixes. You can download the new version 2.0 at the Download page. For more detailed feature descriptions go to the main Features page. Enjoy it!

    Read the article

  • Are there any data remanence issues with flash storage devices?

    - by matt
    I am under the impression that, unlike magnetic storage, once data has been deleted from a flash drive it is gone for good, but I'm looking to confirm this. This actually relates to my smartphone, not my computer, but I figured it would be the same for any flash-type memory. Basically, I have done a "Factory Reset" on the phone, which wipes the flash ROM clean, but I'm wondering: is it really clean, or is the next person that has my phone, if they are savvy enough, going to be able to get all my passwords and whatnot? And yes, I am wearing my tinfoil hat so the CIA satellites can't read my thoughts, so I'm covered there.

    Read the article

  • POST attack on my website

    - by benhowdle89
    Hi, I have a site (humanisms.co.uk) which incorporates a voting system: a user clicks "Up", which sends a parameter to a PHP script via AJAX; the PHP inserts the vote into a MySQL db, and the new "Up" vote count is sent back to the page. This is working great, but I've noticed that the number of votes for one of my questions shot up last night. I viewed my web host's access logs and saw this line:

        108.27.195.232 - - [03/Mar/2011:15:20:18 +0000] "POST /vote.php HTTP/1.1" 200 2 "http://www.humanisms.co.uk/" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.114 Safari/534.16"

    This is repeated well over 100 times, sometimes more than once a second. Now, I know they probably aren't sitting there clicking Vote, but running some sort of PHP loop? I'm not worried about SQL injection, but what can I do to prevent this same IP address from doing this, or what can I do in general to avoid this scenario? I should also say that there's no login, so anyone can use the voting system. Thanks

    Read the article

  • backing up a virtual machine

    - by ErocM
    I asked justcloud.com support whether my VMware VM could be backed up while in use; I can back up the VM once it is shut down, but I was wondering if their "shadow copy" would back it up while running. This was their response:

        Thank you for your email. I am really very sorry but virtual machines can't be backed up for a simple reason that they are virtual, they have virtual memory, not physical memory. Please let me know if there is anything else I can help with. Kind Regards, Barry James User Experience Team www.justcloud.com

    These are physical files on disk, so I wasn't sure I even understood the response. Am I wrong in thinking that a VM can be backed up while in use? Does this response even make sense? I need a cheap alternative for backing up the VM off the server in case it goes down. Any suggestions?

    Read the article

  • Dual LAN Printing

    - by Christopher
    I want to use Ubuntu 10.10 Server in a classroom: a computer lab whose bandwidth is provided by a local cable ISP. That's no problem, but the school network has an IP printer that I want to use, and I cannot reach the printer through the cable Internet connection. However, I have two network cards. How is it possible to use both networks at once? eth0 (static 192.168.1.254) is plugged into a four-port router, 192.168.1.1; on the public side of the four-port router is the Internet provided by the cable company. I also have the classroom workstations plugged into a switch, and the switch is plugged into the four-port router, so the whole classroom is wired to the cable Internet. Could the other NIC, eth1, be plugged into an Ethernet jack in the wall? It would use the school network, and I might receive by DHCP an IP address like 10.140.10.100, with the printer on maybe 10.120.50.10. I was thinking about installing the printer on the server so that it could be shared with the workstations. But how does this work? Can I just plug eth1 into the school network and access both LANs? Thanks for any insight, Chris

    Read the article

  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, plus additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, holding business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide as a complete replacement. This newer ERP system offers several interfaces for third-party applications like mine, from direct SQL access to either a .NET web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1), but I am not sure what design patterns apply to my data access object/component or how to manage the connection/session tokens. With this background, my question has the following three parts: Firstly, I have concerns about moving from the relative safety of an SSIS job, which protects me from downtime and the speed of the ERP system, to connecting directly with one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier? Secondly, it is my understanding that data access objects (the components that connect directly with the web services and convert their responses into the data types I can then work with in my domain objects) should be singletons (and will act as an Adapter/Facade); am I correct? Thirdly, as part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this), which responds with a session token that needs to be provided on every subsequent request. Do I do this once and share it across the whole application? Do I set up a new "connection" for every user of my application and keep the token in their session scope (which might quickly hit licensing limits)? Do I set the "connection" up per page request? Or is there a design pattern I am missing that can manage multiple "connections", where a request uses the first free "connection"? It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web service I use I might need to manually close the "connection/session".
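
    For the third part, one common pattern is a small pool of pre-authenticated sessions with "first free connection" semantics, sized below the licensing limit and shared by all requests. A sketch in Python for illustration only; the ERP login call, token shape, and retry policy are assumptions:

        import queue

        class ErpSession:
            def __init__(self, username: str, password: str):
                self.username = username
                self.password = password
                self.token = self._login()

            def _login(self) -> str:
                # Placeholder: call the ERP web service's login operation and
                # return the session token it issues.
                return f"token-for-{self.username}"

            def call(self, operation: str, payload: dict) -> dict:
                # Placeholder: attach self.token to the request; on an
                # "expired token" fault, re-login once and retry.
                return {"operation": operation, "token": self.token}

        class ErpSessionPool:
            """First-free-connection pool; size stays under the ERP licence limit."""
            def __init__(self, size: int, username: str, password: str):
                self._free = queue.Queue()
                for _ in range(size):
                    self._free.put(ErpSession(username, password))

            def call(self, operation: str, payload: dict) -> dict:
                session = self._free.get()       # blocks until a session is free
                try:
                    return session.call(operation, payload)
                finally:
                    self._free.put(session)      # always return the session to the pool

        pool = ErpSessionPool(size=4, username="svc_account", password="secret")
        print(pool.call("GetItem", {"id": 42}))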

    Read the article

  • Best method to organize/manage dependencies in the VCS within a large solution

    - by SnOrfus
    A simple scenario: two projects are in version control, the application and the test(s). A significant number of check-ins are made to the application daily, and CI builds and runs all of the automation nightly. In order to write and/or run tests you need to have built the application (to reference/load instrumented assemblies). Now, consider the application to be massive, such that building it is prohibitively time-consuming (an entire day to compile). The obvious side effect is that once you've performed a build locally, it is immediately inconsistent with latest. For instance: if I were to sync with latest and open up one of the test projects, it would not build locally until I rebuilt the application, and the same is true when syncing to another branch/build/tag. So, in order to even start working, I need to wait a day to build the application locally so that the assemblies can be loaded - and then those assemblies wouldn't be latest any more. How do you organize the repository or (ideally) your development environment such that you can continually develop tests against whatever the current build is, or a given specific build, while minimizing building the application as much as possible?

    Read the article

  • Specify Credentials to run Powershell Script to Query AD

    - by Ben
    I want to run a PowerShell script to query AD from a machine that is NOT on the domain. Basically, I want to query to see if there is a computer account already on the domain for this machine, and create it if there is not. Because this has to happen before the machine joins the domain, I assume I will need to specify some credentials for it to run under. (I'm pretty new to PowerShell, so apologies if this is a newbie question!) The script I am using to check for the account is below; once this has run, the machine will join the domain using the computer name specified. Can you tell me how to specify some domain credentials to run this section of the script as? Cheers, Ben

        $found = $false
        $thisComputer = <SERVICE TAG FROM BIOS>
        $ou = [ADSI]"LDAP://OU=My Computer OU,DC=myDomain,DC=com"
        foreach ($child in $ou.psbase.Children) {
            if ($child.ObjectCategory -like '*computer*') {
                if ($child.Name -eq $thisComputer) { $found = $true }
            }
        }
        if ($found) { <DELETE THE EXISTING ACCOUNT> }
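
    As a sketch of the credentials part (untested; only the OU path comes from the question): instead of the [ADSI] type accelerator, which binds using the current security context, the DirectoryEntry can be constructed directly with an explicit domain username and password:

        $cred = Get-Credential "myDomain\someUser"
        $ou = New-Object System.DirectoryServices.DirectoryEntry("LDAP://OU=My Computer OU,DC=myDomain,DC=com", $cred.UserName, $cred.GetNetworkCredential().Password)

    The resulting $ou object can then be enumerated in the foreach loop exactly as before.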

    Read the article

  • Fiber-Optic Cable Trick Brings Remote Triggering to Older Flashes

    - by Jason Fitzpatrick
    Many older flashes lack a jack for a sync cable and rely exclusively on a simple slave mode triggered by the primary flash. This hack uses a piece of scrap fiber-optic cable to trigger the flash in bright conditions. Using a flash as an optical slave indoors isn’t much of a problem, but if you introduce bright light (such as outdoor lighting conditions), the ambient light can overpower the small on-camera flash and render the optical slave function useless. To overcome this, Marcell over at Fiber Strobe (a blog dedicated to cataloging experiments in incorporating fiber optics into photography) came up with a simple workaround. By using some foam crafting materials and tape, he whipped up a simple mount for a strand of scrap fiber-optic cable to connect between the on-camera flash and the sensor on the slave flash. Once attached, it works exactly as a sync cable would, except it transmits a pulse of light instead of a pulse of electricity. Hit up the link below for more pictures and a build guide. DIY Fiber Sync Cord [via DIY Photography]

    Read the article

  • How to loop AHK by user input?

    - by AHKFan
    Is there a way to loop a certain script using user input via InputBox? The script below runs only once when I click the button for it. Is there any way for the script to pop up something that asks for a number of times to loop? Let's say something pops up and I give it "10"; then the script is executed 10 times. I hope it's clear enough to understand what the question is, guys :-)

        myscript:
        sleep 100
        InputBox, testvariable, Enter your Input here,,, 350, 120
        send 100
        send {Tab}
        sleep 100
        send %testvariable%
        return

    Thanks for your help in advance.
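
    One approach (a sketch, not tested against this exact setup): AutoHotkey's built-in Loop command accepts a variable count, so a second InputBox can ask how many repetitions to run before the body executes. The variable names here are made up:

        myscript:
        InputBox, repeatCount, Loop count, How many times should this run?
        Loop, %repeatCount%
        {
            sleep 100
            InputBox, testvariable, Enter your Input here,,, 350, 120
            send %testvariable%
            send {Tab}
            sleep 100
        }
        return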

    Read the article
