Search Results

Search found 22301 results on 893 pages for 'software sources'.

Page 630 of 893

  • Friday Fun: Doom Triple Pack

    - by Mysticgeek
    Thankfully it was only a 4-day work week, but that is enough to get sick of the TPS reports. Today we go retro and experience three classic first-person PC shooters with the Doom Triple Pack. The Doom Triple Pack brings you your favorite classic first-person PC shooter games in Flash format. The games include Doom, Heretic, and Hexen…just select which one you want to play. Click on Controls to learn how to navigate your character through the games. Each one has in-game options you can use to control the style of play. Each game runs smoothly for what it is, provided you have a decent internet connection. If you're tired of spreadsheets and meetings and want to relive some of your favorite retro PC gaming days, the Doom Triple Pack can be a lot of fun. If you're looking for other fun ways to waste time at the office, check out the games in the How-To Geek Arcade. Play the Doom Triple Pack

    Read the article

  • How to share problem solving knowledge in a multiteam group?

    - by jonathan
    I've been working in multi-team groups for as long as I've been a web developer. For me, a team can be a lone developer or several people; generally a company will have multiple teams working on different projects, and once a project is out in the wild, any team can perform the maintenance. This is only a small part of the picture, since I'm not talking just about project-specific knowledge but also "craft" knowledge, but it shows how I'm used to working. So: since we work in modularised teams, sometimes I feel the teams are too tightly enclosed in their projects. I've seen cases where, after an hour of discussion, someone asked the question aloud and a person totally unrelated to the project answered it in a much simpler fashion. The problem is not simple to solve, as people tend not to be available all the time; also, sometimes people can't afford the time to go through a problem with the asker, but could solve it alone. I've thought about software-based solutions, something along the lines of Stack Exchange, but I'd like to hear other programmers' opinions on the subject. EDIT I don't know if this is a Wikipedia complex, but I feel that wikis don't encourage the user to actually ask questions, but rather to write articles, and sometimes we don't know what knowledge we need before needing it.

    Read the article

  • Wireless printer going offline upon entering power save mode

    - by Regmi
    I recently bought a Samsung ML-1865W, a wireless laser printer. It works very well wirelessly until, after a while, it enters power save mode and goes offline. I then cannot connect to the printer (pinging its IP and the various ports associated with it gives no response) until I power cycle it. I have checked my wireless settings and my router, and I have installed the printer software on Windows XP, Windows 2000 and on a Mac; the printer behaves just the same. Any ideas whether the printer hardware itself is broken, or whether something on my network/application side is the culprit? If you own this printer, have you ever had any trouble like this?

    Read the article

  • Do out-of-office replies get sent to the return-path header?

    - by Brian Armstrong
    I always thought the Return-Path header was used for bounced messages, so I have my email newsletter software set up to unsubscribe anyone whose address replies back to the Return-Path address. But recently some users started telling me their out-of-office replies were unsubscribing them. Do out-of-office replies get sent to the Return-Path on some email servers? I could check all emails to that address and see if they have the "message/delivery-status" content type in one of the parts, but I wasn't sure if this was necessary.
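    A minimal sketch of that check, assuming Python and the standard-library email package (my suggestion, not the poster's actual stack): treat a message as a real bounce only when it is a multipart/report DSN containing a message/delivery-status part, and explicitly skip RFC 3834-style autoreplies.

      import email

      def is_real_bounce(raw_bytes: bytes) -> bool:
          """Return True only for genuine delivery status notifications."""
          msg = email.message_from_bytes(raw_bytes)
          # RFC 3834 autoreplies (vacation / out-of-office) usually set this header.
          if msg.get("Auto-Submitted", "").lower().startswith("auto-replied"):
              return False
          # RFC 3464 bounces are multipart/report with a delivery-status part.
          if msg.get_content_type() == "multipart/report":
              return any(part.get_content_type() == "message/delivery-status"
                         for part in msg.walk())
          return False

    Only addresses whose replies pass a test like this would then be fed to the unsubscribe step.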

    Read the article

  • RoundhousE now supports Oracle, SQL2000

    - by Robz / Fervent Coder
    RoundhousE, the database migration tool based on SQL scripts, has added support for Oracle and SQL 2000. There have also been numerous other little things, including better logging and a script run errors table. The script errors table captures what went wrong when/if your scripts are not quite up to par or there is some other issue. A special thanks goes out to http://twitter.com/PascalMestdach and http://twitter.com/jochenjonc. They worked hard on this and all I did was provide guidance and help bring it back to the trunk. This is a preview of the new log:

      ==================================================
      Versioning
      ==================================================
      Attempting to resolve version from C:\code\roundhouse\code_drop\sample\deployment\_BuildInfo.xml using //buildInfo/version.
      Found version 0.5.0.188 from C:\code\roundhouse\code_drop\sample\deployment\_BuildInfo.xml.
      Migrating TestRoundhousE from version 0 to 0.5.0.188.
      Versioning TestRoundhousE database with version 0.5.0.188 based on http://roundhouse.googlecode.com/svn.
      ==================================================
      Migration Scripts
      ==================================================
      Looking for Update scripts in "C:\code\roundhouse\code_drop\sample\deployment\..\db\TestRoundhousE\up". These should be one time only scripts.
      --------------------------------------------------
      Running 0001_CreateTables.sql on (local) - TestRoundhousE.
      Running 0002_ChangeTable.sql on (local) - TestRoundhousE.
      Running 0003_TestBatchSplitter.sql on (local) - TestRoundhousE.
      --------------------------------------------------

    But what are you waiting for? Head out and grab the latest release today!

    Read the article

  • Playing Age of Empires II multiplayer in VirtualBox over a Wi-Fi network

    - by Gaurav_Java
    I installed Age of Empires II (Expansion) in VirtualBox (with a Windows XP guest). It works great in single-player mode. Unfortunately, when I tried playing multiplayer over a Wi-Fi hotspot created on my Ubuntu machine, the guest can't seem to join games. When I connect to my Wi-Fi router instead, others are able to connect to my system and we can play in multiplayer mode. This is what I've done so far to try to resolve the issue: I noticed that the IP address of my virtual machine was 10.0.x.x, while the local IP on Ubuntu is 192.168.x.x, which I figured was a problem, so I changed from NAT networking to bridged networking in VirtualBox. I turned off the Windows firewall in the virtual machine and don't have any ports blocked by Ubuntu, so no software firewall should be at fault. However, I'm still unable to play multiplayer games, and suspect that some kind of networking issue lies at the heart of the problem. I'm not sure what else I would need to change, however. So essentially I was wondering if anyone else here has managed to play AOE2, or any similar game, inside VirtualBox from Ubuntu, and if so, what you needed to do to make it possible. Or if anyone has suggestions on where else to look to figure out the problem, I'd appreciate that as well. Unfortunately AOE2 itself doesn't provide any debugging information to troubleshoot the inability to connect to network games. Here are the IP results for both Ubuntu and the VirtualBox XP guest. I want to play in multiplayer mode in VirtualBox over the Wi-Fi hotspot created on my Ubuntu system, so that others can connect and play. I hope someone can answer this.
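    For reference, a hedged sketch of the bridged-networking switch described above, driven from the Ubuntu host (standard VBoxManage flags; the VM and interface names are hypothetical):

      import subprocess

      VM_NAME = "WindowsXP"    # hypothetical VM name
      WIFI_IFACE = "wlan0"     # hypothetical host wireless interface

      # Attach the guest's first NIC to the host Wi-Fi adapter in bridged
      # mode so it gets a 192.168.x.x address from the same network as the
      # other players. The VM must be powered off when this runs.
      subprocess.run(
          ["VBoxManage", "modifyvm", VM_NAME,
           "--nic1", "bridged", "--bridgeadapter1", WIFI_IFACE],
          check=True,
      )

    One caveat worth noting: bridged mode over wireless adapters is less reliable than over wired ones (many Wi-Fi drivers will not forward frames for a second MAC address), which could explain why the same setup works against the router but not against a hotspot hosted on the laptop itself.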

    Read the article

  • How do I connect a 30-year-old Tandy 1400LT laptop to the internet?

    - by Clemens Bergmann
    Just for the fun of it, I want to get an old Tandy 1400LT laptop:
      - small monochrome display
      - two floppy drives
      - RS-232C connector
      - "printer" connector
    connect the thing to the internet, and use it as an SSH terminal. How would I connect it to the internet? The software should be no problem, as it is 386-class hardware; there should be a small Linux distribution that can run on it. But how would I physically connect the hardware? It has no Ethernet port. Does anyone have experience with serial/parallel-to-Ethernet converters?
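    One classic answer, sketched under assumptions (this is my illustration, not from the question): run a SLIP link over the RS-232 port to a nearby Linux box, which then routes the laptop onto the LAN. The commands are standard slattach/net-tools usage on the routing box; device names and addresses are hypothetical.

      import subprocess

      SERIAL_DEV = "/dev/ttyS0"   # port the Tandy's null-modem cable plugs into
      LINUX_END = "192.168.5.1"   # Linux side of the point-to-point link
      TANDY_END = "192.168.5.2"   # Tandy side (running its own SLIP stack)

      # Turn the serial line into a SLIP network interface (creates sl0).
      subprocess.Popen(["slattach", "-p", "slip", "-s", "9600", SERIAL_DEV])

      # Bring up the point-to-point interface; a small MTU suits a 9600 bps link.
      subprocess.run(["ifconfig", "sl0", LINUX_END, "pointopoint", TANDY_END,
                      "mtu", "576", "up"], check=True)

      # Let the Linux box forward the Tandy's packets to the rest of the network.
      subprocess.run(["sysctl", "-w", "net.ipv4.ip_forward=1"], check=True)

    A hardware serial-to-Ethernet converter achieves the same thing without the intermediate PC, at the cost of flexibility.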

    Read the article

  • MSSQL Backup Question

    - by MJ
    I'm currently taking over for someone who was in charge of backing up over 250 servers on different platforms, until we hire a replacement. The main question I have is: if we use backup software such as Symantec Backup Exec, does it perform the correct backup for MS SQL Server? I was listening to the Stack Overflow podcast and heard them mention that you cannot just back up the SQL data files; you also need the transaction log. So, if we just back up the whole machine, would we be able to recover it correctly, since we would be backing up both the data file and the log? Thanks!
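    For context, a minimal sketch of what a native SQL Server backup pair looks like, assuming Python with pyodbc and hypothetical server/database names (Backup Exec's SQL agent does the equivalent through the same engine facilities): a full database backup plus a transaction log backup, which together are what make point-in-time recovery possible. Copying .mdf/.ldf files from a running server is not a supported substitute.

      import pyodbc

      # BACKUP cannot run inside a user transaction, hence autocommit=True.
      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};"
          "SERVER=myserver;DATABASE=master;Trusted_Connection=yes;",
          autocommit=True,
      )
      cur = conn.cursor()

      # Full backup: the baseline every restore starts from.
      cur.execute(r"BACKUP DATABASE [MyDb] TO DISK = 'D:\backups\MyDb_full.bak'")
      while cur.nextset():   # drain the engine's progress messages
          pass

      # Log backup: captures changes since the full backup (FULL recovery model).
      cur.execute(r"BACKUP LOG [MyDb] TO DISK = 'D:\backups\MyDb_log.trn'")
      while cur.nextset():
          pass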

    Read the article

  • Microsoft Researchers show off best Touch Screen ever made. Better than Apple touch screens!

    - by Gopinath
    All the touch devices we have on the market today, like iPads, iPhones, and Samsung tablets and phones, have a very small issue: 100 milliseconds of lag. The lag is the amount of time a touch device takes to respond after you touch it. The 100 milliseconds may not be an issue when you are tapping and swiping interface elements, but it becomes apparent when you fling your finger around the screen quickly. For example, in any painting app the lag is very obvious, and the screen responds more slowly than an artist can paint with his finger. Researchers at Microsoft labs came out with a prototype touch device that drastically cuts the 100 milliseconds of lag down to just 1 millisecond. That's 100 times faster than today's touch screen devices. Check out the video embedded below for a demo of the new touch screen. Over at TechCrunch, Chris Velazco says: The difference is staggering, especially when Dietz trots out the slow-motion footage. With the delay between touch input and screen response slashed by orders of magnitude, a device that sports the sort of super-low-latency Dietz envisions has the potential to feel far more (for lack of a better term) natural than its brethren. There's zero delay when you slide a checker across a board, for example, and bringing that sort of instantaneous feedback to the many screens in our lives could help to bridge the gap between operating a bit of software and the feeling of interacting with objects. It will be a great boost to Microsoft's tablet strategy if they succeed in bringing this research to mass market and allowing their partners to use the technology on Windows 8 tablets.

    Read the article

  • Customer Engagement: Are Your Customers Engaged With Your Brands?

    - by Michael Snow
    Engaging Customers is Critical for Business Growth. This week we'll be spending some time looking at customer engagement. We all have stories about how we try to engage our customers better than ever before. We all know that successfully engaging customers is critical to an organization's business success. We also know that engaging our customers is more challenging today than ever before: there is so much noise to compete with for anyone's attention. Over the last decade and a half we've watched the online channel become a primary one for conducting our business and even managing our lives. And during this evolution, the customer journey has grown increasingly complex. Customers themselves have assumed increasing power and influence over the purchase process and over setting the tone and pace of the relationships they have with brands, and you see the evidence of this in the very high expectations customers have today. They expect brand experiences that are personalized and relevant; in other words, they want experiences that demonstrate that the brand understands their interests, preferences, and past interactions with them. They also expect their experience with a brand and the community surrounding it to be social and interactive; it's no longer acceptable to have a static, one-way dialogue with your customer base, or to fail to connect your customers with fellow customers, or with your employees and partners. And on top of all this, customers expect us to deliver this rich and engaging, personalized and interactive experience in a consistent way across a variety of channels, including web, mobile, and social channels, and even offline venues such as in-store or via a call center. As a result, delivering on these expectations and successfully engaging your customers is a great challenge today. Customers expect a personal, engaging and consistent online customer experience. Today's consumer expects to engage with your brand and the community surrounding it in an interactive and social way. Customers have come to expect a lot from the online customer experience.
    They expect it to be personal:
      - Accessible: regardless of my device, via my existing online identities
      - Relevant: content that interests me
      - Customized: the ability to tailor my online experience
    They expect it to be engaging:
      - Social: so I can share content with my social networks
      - Intuitive: so I can easily find what I need
      - Interactive: so I can interact with online communities
    And they expect it to be consistent across the online experience, so you had better have your brand and information ducks in a row. These expectations are by no means limited to your customers. Your employees (and partners) also expect to be empowered with engagement tools across their internal and external communications and interactions with customers, partners and other employees. We had a great conversation with Ted Schadler from Forrester Research entitled "Mobile is the New Face of Engagement" that is now available on demand. Take a look at all the webcasts available to watch from our Social Business Thought Leader Series. Social capabilities have become pervasive and have changed customers' expectations for their online experiences. The days of one-direction communication with customers are at an end. Today's customers expect to engage in a dialogue with your brand and the community surrounding it in an interactive and social way.
    You have a very short window of opportunity to engage a customer before they go to another site in their pursuit of information, products, or services. In fact, customers who engage with brands via social media tend to spend between 20% and 40% more than customers who don't. And your customers are also increasingly influenced by their social networks: 40% of consumers say they factor in Facebook recommendations when making purchasing decisions. This means a few things for today's businesses: incorporating forms of social interaction such as commenting or reviews, and tightly integrating your online experience with your customers' social networking experiences, are crucial for keeping eyeballs on your desired pages.
    Notes/Sources:
      - 93% - Cone finds that Americans expect companies to have a presence in social media (http://www.coneinc.com/content1182)
      - 40% of consumers factor in Facebook recommendations when making decisions about purchasing (Increasing Campaign Effectiveness with Social Media, Syncapse, March 2011)
      - 20%-40% - Customers who engage with a company via social media spend this percentage more with that company than other customers (Bain & Company report, Putting Social Media to Work)

    Read the article

  • Attend MySQL Webinars This Week

    - by Bertrand Matthelié
    Interested in learning more about MySQL as an embedded database? Or in building highly available applications with MySQL and DRBD? Join our webinars this week! All the information is below. Next Tuesday (November 20) we will provide an update on what's new in MySQL Enterprise Edition. We have live Q&A during the webinars, so you'll get the chance to ask all your questions.
    Top 10 Reasons to Use MySQL as an Embedded Database. Tuesday, November 13, 9:00 a.m. PT. Review the top 10 reasons why MySQL is technically well-suited for embedded use, as well as the related business reasons vendors choose MySQL initially, over time, and across product lines. Register for the Webcast.
    MySQL High Availability with Distributed Replicated Block Device. Thursday, November 15, 9:00 a.m. PT. Learn how to build highly available services with MySQL and distributed replicated block device (DRBD). The DRBD high-availability solution comprises a complete stack of open source software that delivers high-availability database clusters on commodity hardware, with the option of 24/7 support from Oracle. Register for the Webcast.
    Technology Update: What's New in MySQL Enterprise Edition. Tuesday, November 20, 9:00 a.m. PT. Find out what's new in MySQL Enterprise Edition. Register for the Webcast.

    Read the article

  • How do internal SMTP servers send mail to internal mail servers on the network?

    - by dmr83457
    We have upgraded our internal corporate email server, and its IP address has changed within our network. A second email server is used for sending bulk jobs for a mailing-list service that we offer. Since the switch of the corporate server's internal IP, we have been seeing problems when the bulk email server tries to send email to our own domain. The log shows that it is still trying to hit the old corporate server instead of the new one. I have looked through all the settings for the bulk email software and see nothing there about sending to internal mail servers, and I do not recall doing anything special to get this working when it was set up a couple of years ago. Is there a setting in Windows Server 2003, or on the network, that maps the MX record's external IP to an internal IP so mail gets routed correctly?
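    As a diagnostic sketch (my suggestion, assuming the dnspython package; the domain and address are hypothetical), it is worth checking what the bulk-mail box actually resolves for the domain's MX, since a stale hosts-file entry or an internal DNS zone still pointing at the old server is the usual cause of exactly this symptom:

      import socket

      import dns.resolver  # pip install dnspython

      DOMAIN = "example.com"          # your corporate domain
      NEW_INTERNAL_IP = "10.0.0.25"   # the corporate server's new internal IP

      for mx in dns.resolver.resolve(DOMAIN, "MX"):
          host = str(mx.exchange).rstrip(".")
          # gethostbyname honours the local hosts file, so this shows exactly
          # what the bulk server's SMTP stack will connect to.
          ip = socket.gethostbyname(host)
          flag = "ok" if ip == NEW_INTERNAL_IP else "stale?"
          print(f"{host} -> {ip} [{flag}]")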

    Read the article

  • Screenshot before Windows starts: without another computer?

    - by Nano8Blazex
    I'm pretty sure this must have been asked before, but I haven't found a duplicate question, so I shall ask again... Is it possible to take a screenshot of my computer before the OS boots up, WITHOUT an external computer or virtualization software like VMware? E.g. at the BIOS, at the "Windows is Loading" screen, or even at the login screen? Is it possible to take screenshots of BSODs as well? EDIT: I'm using a desktop PC running Windows 7 Ultimate in a home environment. Thanks.

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes-Oxley). From something as simple as relating Tasks to check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information is available in TFS, and more. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results. Figure: The only thing missing is Build. In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together. Figure: The relationships required to make this work can get a little confusing. If Build is added to this, to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release. Figure: How are things progressing. Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and can be augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…
    Requirements – The root of all knowledge. Requirements are the things that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model: Figure: If the centre of the world was a Requirement. We can track which Releases Requirements were scheduled in, but this can change over time as more details come to light. Figure: Who edited the Requirement and when. There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given Release, but also to know which Requirements were ever assigned to a particular Release. Figure: Some magic required, but result still achieved. As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS:
      select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]
    Figure: Work Item Query Language (WIQL) format. In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
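    As a hypothetical illustration of that format (the field reference names are the standard ones; the project name and date are invented), a historical query for Requirements as they stood at the end of January would look like:

      select [System.Id], [System.Title], [System.State]
      from WorkItems
      where [System.TeamProject] = 'TestProject'
        and [System.WorkItemType] = 'Requirement'
      order by [System.Id]
      asof '2010-01-31'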
    Figure: Saving queries locally can be useful. All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.
    Tasks – Where the real work gets done. Tasks are the workhorse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts. Figure: The Task Work Item Type has its own relationships. Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software, but however it happens, all of the Tasks created should be a child of a Requirement Work Item Type. Figure: Tasks are related to the Requirement. Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth. Figure: The Task Work Item Type has a narrower purpose. Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under source control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset. Figure: Tasks are associated with Changesets.
    Changesets – Who wrote this crap. Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task. Figure: Changesets are linked by Tasks and Builds. Figure: Changesets tell us what happened to the files in version control. Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level. Figure: On check-in you can resolve a Task, which automatically associates it. Because of this we can view the history of any file within the system and see how many changes have been made and which Changesets they belong to. Figure: Changes are tracked at the file level. What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system. Figure: Annotate shows the lines. The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?
    Branches – And why we need them. Branches are a really powerful tool for development and release management, but they are most important for audits. Figure: One way to audit releases. The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build is created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. A better method is to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together. Builds are the glue that enables the next level of traceability, by tying everything together. Figure: The dashed pieces are not out of the box, but can be enabled. When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build. Figure: The folder identifies what changes are included in the build. The Build sets a Label on the source with the same name as the Build, but the Build itself also records the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the check-ins that have occurred since the last Build. Figure: What changes have been made since the last successful Build. It will then use that information to identify the Work Items associated with all of those Changesets, associate them with the Build, and update the "Integrated In" field of those Work Items. Figure: Find all of the Work Items to associate with. The "Integrated In" field of all the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change. Figure: Now we know which Work Items were completed in a build. Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?
    Test Cases – How we know we are done. The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item Type called a Test Case. Test Cases enable two scenarios. The first is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the business on a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order. Figure: You can have many Acceptance Criteria for a single Requirement. It is crucial for this to work that someone from the business signs off on the Test Case moving from the "Design" to "Ready" state. The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test and the test results of a unit test designed to prove the existence, and then the non-recurrence, of a Bug. Figure: Associating a Test Case with an automated test. Although it is possible, it may not make sense to track the execution of every unit test in your system; but there are many integration and regression tests that may be automated which it would make sense to track in this way.
    Bugs – Let's not have regressions. In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked. Figure: Bugs are the centre of their own world. If the fix for a Bug is big enough to require being broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug. Figure: Bugs have many associations. This allows you to track Bugs/Defects in your system effectively and report on them.
    Change Requests – I am not a feature. In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement. Figure: Make sure your Change Requests only affect Requirements and do not rewrite them.
    Conclusion. Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:
      - Business Analysts manage Requirements and Change Requests
      - Programmers manage Tasks and check in against Change Requests and Bugs
      - Testers manage Bugs and Test Cases
      - Build Masters manage Builds
    Although there is some crossover, there are still roles, or "hats", that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • How to persist changes to instances in the cloud.

    - by Peter NUnn
    Hi folks, I must be missing something here, but can someone clue me in on how to persist changes (such as software installs, etc.) on machines in the cloud (either EC2 or my own Eucalyptus cloud)? I have instances running and can attach extra disks to them, but every time I terminate an instance, all of my changes are lost the next time I run it. Now, this sort of makes sense in that the instances are virtual, but there must be some way to make these changes persist; I'm just missing how it's done. Thanks. Peter.
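    The usual answer is to bake the configured machine into a new image and launch from that instead of the pristine one. A sketch using today's boto3 API (which is newer than this question; the instance ID and names are hypothetical):

      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Snapshot the running, already-configured instance into a reusable AMI.
      resp = ec2.create_image(
          InstanceId="i-0123456789abcdef0",
          Name="base-with-our-software",
          Description="Base image with software installs baked in",
          NoReboot=False,   # allow a reboot for a consistent filesystem image
      )
      print("New AMI:", resp["ImageId"])

    Instance-store-backed instances lose their root disk on termination, which is exactly the behaviour described; EBS-backed instances (or data kept on attached EBS volumes) survive stops, and a registered image survives everything.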

    Read the article

  • Export a single layer as an image in Photoshop

    - by wrburgess
    I have a lot of designers send me layered PSDs of their designs and I need to break out the pieces of the designs to place on web pages. I can do a decent number of things in Photoshop, but I'm hardly efficient with it. My old way of just copying the image that's in a layer and pasting into a new image seems to take forever as I screw around with cropping and such. I've got Photoshop CS5, so I don't need external software to do anything, but I just need to figure out how to take a single layer, that may hold something small like an icon, and export it as a PNG or JPG. I am aware of the script called "Export Layers to Files" but it took about an hour and exported ALL of my layers to a huge number of files. I wasn't looking for a solution that broad. Is there an easy way to do this?

    Read the article

  • Linux to Solaris @ Morgan Stanley

    - by mgerdts
    I came across this blog entry and the accompanying presentation by Robert Milkoski about his experience switching from Linux to Oracle Solaris 11 for a distributed OpenAFS file-serving environment at Morgan Stanley. If you are an IT manager, the presentation will show you:
      - Running Solaris with a support contract can cost less than running Linux (even without a support contract) because of the technical advantages of Solaris.
      - IT departments can benefit from hiring computer scientists into Systems Programmer or similar roles. Their computer science background should be nurtured so that they can continue to deliver value (savings and opportunity) to the business as technology advances.
    If you are a sysadmin, developer, or somewhere in between, the presentation will show you:
      - A presentation that explains your technical analysis can be very influential.
      - Learning and using the non-default options of an OS can make all the difference as to whether one OS is better suited than another. For example, see the graphs on slides 3-5. The ZFS default is to not use compression.
      - When trying to convince those that hold the purse strings that your technical direction should be taken, the financial impact can be the part that closes the deal. See slides 6, 9, and 10. Sometimes reducing rack space requirements can be the biggest impact, because it may stave off or completely eliminate the need for facilities growth.
      - DTrace can be used to shine light on performance problems that may be suspected but not diagnosed. It is quite likely that these problems have existed in OpenAFS for a decade or more; DTrace made diagnosis possible.
      - DTrace can be used to create performance analysis tools without modifying the source of the software under analysis. See slides 29-32.
      - Microstate accounting, visible in the prstat output on slide 37, can be used to quickly draw focus to problem areas that affect CPU saturation. Note that prstat without -m gives a time-decayed moving average that is not nearly as useful.
      - Instruction-level probes (slides 33-34) are a super-easy way to identify which part of a function is hot.

    Read the article

  • Can't access some websites with any browser

    - by Charles Kingsmill
    I'm running Windows 7 64-bit on a new Samsung laptop, connected to the internet via an Ethernet cable to my university's ISP. Some sites work fine (e.g. google.com) but I can't access others at all (microsoft.com, topshop.com). I can't connect to those sites in safe mode with networking, and ping and tracert both fail. There's no proxy. Other users can connect successfully to these sites using my cable and socket. I've tried all of the following with no success:
      - using various browsers (IE9, FF, Chrome)
      - creating a new user
      - updating drivers
      - clearing the DNS cache
      - using OpenDNS and Google's DNS
      - turning off Avast
      - tweaking the MTU
      - running the MS malicious software removal tool
      - running Spybot S&D
      - reviewing the hosts file
      - disabling the IPv6 options
      - repairing / resetting winsock settings
      - disabling advanced JavaScript options
    I have run out of ideas... can anyone see anything I've missed?
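    One diagnostic worth automating (my suggestion, not from the post): when a handful of sites hang while most work, a path-MTU black hole is a classic cause. Probing with don't-fragment pings of decreasing size shows whether full-size 1500-byte frames (1472 bytes of payload plus 28 bytes of headers) make it through; a sketch for Windows:

      import subprocess

      HOST = "microsoft.com"   # one of the unreachable sites

      for payload in (1472, 1464, 1400, 1300, 1200):
          # Windows ping flags: -f = don't fragment, -l = payload bytes, -n 1 = once
          result = subprocess.run(
              ["ping", "-f", "-l", str(payload), "-n", "1", HOST],
              capture_output=True, text=True,
          )
          ok = "TTL=" in result.stdout
          print(f"{payload:>5} bytes: {'ok' if ok else 'blocked / needs fragmenting'}")

    If small payloads succeed where large ones fail, lowering the interface MTU (or fixing whatever device drops the ICMP "fragmentation needed" replies) is the path to follow.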

    Read the article

  • What is the ideal length of a method?

    - by iPhoneDeveloper
    In object-oriented programming there is no exact rule on the maximum length of a method, but I still found these two quotes somewhat contradictory, so I would like to hear what you think. In Clean Code: A Handbook of Agile Software Craftsmanship, Robert Martin says: "The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that. Functions should not be 100 lines long. Functions should hardly ever be 20 lines long." And he gives an example of Java code he saw from Kent Beck: "Every function in his program was just two, or three, or four lines long. Each was transparently obvious. Each told a story. And each led you to the next in a compelling order. That's how short your functions should be!" This sounds great, but on the other hand, in Code Complete, Steve McConnell says something very different: the routine should be allowed to grow organically up to 100-200 lines; decades of evidence say that routines of such length are no more error prone than shorter routines. And he gives a reference to a study that says routines of 65 lines or longer are cheaper to develop. So while opinions diverge on the matter, is there a functional best practice for determining the ideal length of a method for you?
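    A toy illustration (my own example, from neither book) of the style the Clean Code quote argues for: one routine that could easily have grown to McConnell's 65+ lines, expressed instead as a handful of functions that each tell one part of the story.

      from dataclasses import dataclass, field

      @dataclass
      class Item:
          price: float
          qty: int

      @dataclass
      class Customer:
          name: str
          balance: float = 0.0

      @dataclass
      class Order:
          customer: Customer
          items: list = field(default_factory=list)

      # The top-level routine reads like a table of contents...
      def process_order(order: Order) -> None:
          validate(order)
          total = price_with_tax(order)
          charge(order.customer, total)
          send_receipt(order.customer, total)

      # ...and each helper is small enough to be transparently obvious.
      def validate(order: Order) -> None:
          if not order.items:
              raise ValueError("empty order")

      def price_with_tax(order: Order, tax_rate: float = 0.08) -> float:
          subtotal = sum(item.price * item.qty for item in order.items)
          return round(subtotal * (1 + tax_rate), 2)

      def charge(customer: Customer, amount: float) -> None:
          customer.balance -= amount

      def send_receipt(customer: Customer, amount: float) -> None:
          print(f"Receipt for {customer.name}: ${amount:.2f}")

    Whether the split helps or hurts is exactly what the two quotes disagree about: Martin would say each helper tells a story, while McConnell would note that a single 20-line version is no more error prone and saves the reader four jumps.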

    Read the article

  • How do I structure code and builds for continuous delivery of multiple applications in a small team?

    - by kingdango
    Background: 3-5 developers supporting (and building new) internal applications for a non-software company. We use TFS, although I don't think that matters much for my question. I want to be able to develop a deployment pipeline and adopt continuous integration/deployment techniques. Here's what our source tree looks like right now (we use a single TFS Team Project):
      $/MAIN/src/
      $/MAIN/src/ApplicationA/VSSOlution.sln
      $/MAIN/src/ApplicationA/ApplicationAProject1.csproj
      $/MAIN/src/ApplicationA/ApplicationAProject2.csproj
      $/MAIN/src/ApplicationB/...
      $/MAIN/src/ApplicationC
      $/MAIN/src/SharedInfrastructureA
      $/MAIN/src/SharedInfrastructureB
    My goal (a pretty typical promotion pipeline):
      1. When a code change is made to a given application, I want to be able to build that application and auto-deploy the change to a DEV server. I may also need to build dependencies on shared infrastructure components, and I often have some database scripts or changes as well.
      2. If developer testing passes, I want a manually triggered but automated deploy of that build on a STAGING server, where end users will review new functionality.
      3. Once it's approved by end users, I want a manually triggered auto-deploy to production.
    Question: How can I best adopt continuous deployment techniques in a multi-application environment? A lot of the advice I see is single-application-specific; how is it best applied to multiple applications? For step 1, do I simply set up a separate Team Build for each application? What's the best approach to accomplishing steps 2 and 3, promoting the latest build to new environments? I've seen this work well with web apps, but what about database changes?

    Read the article

  • Windows Server 2003 Router with Port Forwarding

    - by jM2.me
    Hello, I own a small company and we have purchased a server to host a few server applications as well as other software. We would like to set up our network in the following way: Internet <- WindowsServer2003 as router <- Switch <- Office Computers. The server has two NICs; we have a 24-port gigabit network switch connected to one NIC and the internet connection to the other. Our ISP is Frontier and we have FiOS 25/25. The network cable from the ONT box connects directly to our server; there are no modems or routers. Setting up DHCP on Windows Server 2003 is an easy job, but we would also like the ability to forward certain ports to office computers. I have some knowledge of networking, but not much. How can I set up the DHCP server on Win2k3 together with the ability to forward some ports to office computers? Thank you for your time.
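    Port forwarding on a Server 2003 router is handled by RRAS NAT rather than DHCP. A hedged sketch of the standard netsh syntax (the interface name, addresses, and ports here are hypothetical), to run once RRAS is configured with the internet-facing NIC as the NAT public interface:

      import subprocess

      PUBLIC_IFACE = "Internet"   # name of the RRAS public (NAT) interface
      RULES = [
          # (protocol, public port, office machine IP, internal port)
          ("tcp", 3389, "192.168.1.50", 3389),  # RDP to one workstation
          ("tcp", 8080, "192.168.1.60", 80),    # an internal web app
      ]

      for proto, pub_port, int_ip, int_port in RULES:
          subprocess.run(
              ["netsh", "routing", "ip", "nat", "add", "portmapping",
               PUBLIC_IFACE, proto, "0.0.0.0", str(pub_port), int_ip, str(int_port)],
              check=True,
          )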

    Read the article

  • Oracle SPARC SuperCluster and US DoD Security guidelines

    - by user12611852
    I've worked in the past to help our government customers understand how best to secure Solaris. For my customer base that means complying with Security Technical Implementation Guides (STIGs) from the Defense Information Systems Agency (DISA). I recently worked with a team to apply both the Solaris and Oracle 11gR2 database STIGs to a SPARC SuperCluster. The results have been published in an Oracle white paper. The SPARC SuperCluster is a highly available, high performance platform that incorporates:
      - SPARC T4-4 servers
      - Exadata Storage Servers and software
      - ZFS Storage Appliance
      - InfiniBand interconnect
      - Flash Cache
      - Oracle Solaris 11
      - Oracle VM for SPARC
      - Oracle Database 11gR2
    It is targeted towards large, mission-critical database, middleware and general purpose workloads. Using the Oracle Solution Center we configured an SSC, applied the DoD security guidance, and confirmed the functionality and performance of the system. The white paper reviews our findings and includes a number of security recommendations. In addition, customers can contact me for the itemized spreadsheets with our detailed STIG reports. Some notes:
      - There is no DISA STIG documentation for Solaris 11. Oracle is working to help DISA create one using their new process. As a result, our report follows the Solaris 10 STIG document and applies it to Solaris 11 where applicable.
      - In my conversations over the years with the DISA Field Security Office, they have repeatedly told me: "The absence of a DISA-written STIG should not prevent a product from being used. Customers may apply vendor or industry security recommendations to receive accreditation."
    Thanks to the core team: Kevin Rohan, Gary Jensen and Rich Qualls, as well as the staff of the Oracle Solution Center and Glenn Brunette, for their help in creating the document.

    Read the article

  • Tool to check if XML is valid in my VS2012 comments

    - by davidjr
    I am writing the documentation for our company's software, developed with VS2012. I need to add XML examples to the summary of each class, due to XML instantiation of objects. We are using Sandcastle to create the documentation (company choice), and I want to be able to review my XML comments without building the help file every time. Is there an application anyone would recommend where I can view how the XML renders before I build the help file? Here is my example:
      /// <summary>
      /// Performs DFT on a data array, writes output in a CSV file.
      /// </summary>
      /// <example>
      /// <para>XML declaration</para>
      /// <code lang="xml" xml:space="preserve">
      /// &lt;DataProvider name="DftDP" description="Computes DFT" etc...
    I want to check the XML to make sure it is valid, maybe by copying and pasting it into a tool of some sort?
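    Lacking a dedicated tool, a few lines of script can at least catch well-formedness errors without a Sandcastle build. A sketch (my own approach, assuming Python): strip the /// prefixes, wrap the fragment in a root element, and let the parser complain.

      import xml.etree.ElementTree as ET

      def check_doc_comment(comment: str) -> None:
          # Drop the leading '///' from each line of the comment block.
          body = "\n".join(line.lstrip().lstrip("/") for line in comment.splitlines())
          try:
              # Doc comments are XML fragments, so give them a synthetic root.
              ET.fromstring(f"<doc>{body}</doc>")
              print("well-formed")
          except ET.ParseError as err:
              print(f"XML error: {err}")

      check_doc_comment("""
      /// <summary>
      /// Performs DFT on a data array, writes output in a CSV file.
      /// </summary>
      """)

    This only proves the fragment is well-formed XML, not that Sandcastle will render it the way you expect, but it catches the unclosed-tag and bad-entity mistakes that otherwise only surface after a full help-file build.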

    Read the article

  • Friday Fun: Building Blasters 2

    - by Mysticgeek
    After dealing with unnecessary spreadsheets and TPS reports all week, it's time to waste time playing a flash game. Today we take a look at Building Blasters 2, where you strategically place explosives to bring down structures. You need to place explosives carefully to clear areas in the red level, keep bystanders safe, and manage your budget. After placing the explosives on the structure, you can set the amount of time that passes before they blow; this comes in handy when you reach advanced levels. When you're ready to start the demolition, click on the Detonate button and watch the buildings fall. If you don't achieve the objectives, you will get the Demolition Error screen and can replay the level. After you've received enough money, you'll get a message between missions telling you there is enough money to buy items in the shop. You can get enhanced destructive devices such as nitroglycerin and a wrecking ball, call in an air strike, and more… If you're sick of the pointy-haired boss dragging you down all week, pretend the structures are the office building and destroy away. Building Blasters 2 is a great way to have fun and let off steam so you can enjoy your weekend. Play Building Blasters 2. For additional fun games to play, make sure to check out the How-To Geek Arcade.

    Read the article

  • How safe is locking the screen?

    - by D Connors
    So, both Windows and Linux have a pretty useful feature that allows you to leave everything running on the PC while also keeping invaders away by locking the screen. My question is: say I leave my laptop with the screen locked while I go get a donut, and then it gets stolen. Assuming the thief has access to whatever software he needs, how easy or hard would it be for him to access my (currently logged-in) account? Now let me be clear: I'm not asking whether he can access the data on the hard drive. I know he can, and that issue falls under data encryption, which is not my question here. I'm focusing on how hard it would be to get around the "Insert Password" screen and gain full access to my account. I'm looking for answers regarding both OSes; but, if needed, assume Ubuntu. Thank you.

    Read the article
