Search Results

Search found 21063 results on 843 pages for 'stochastic process'.


  • Folder redirection save times are awfully slow

    - by wbmeu
    I recently set up folder redirection for Documents on Server 2008, but it's painfully slow at the moment. My users are all using Visual Studio 2010, and a save now takes 20-30 seconds (whereas it used to take 2 seconds locally). I understand this is because files are being saved to the server, and that takes time (though I did think it would be faster over a gigabit link, with the servers on the same network). I enabled offline files on the share, set the option to All files or folders, and enabled Optimize for performance. I thought this would pull all the files down locally (which I think it did), allow local editing of said files, and synchronize them quietly in the background from time to time (which it does not do - it saves right to the share). Is there any way I can speed this process up a bit? Any other tweaks I can do?

    Read the article

  • Custom-built accounts package running on Windows Vista, printer selection box no longer appearing

    - by liam hester
    This package has a print function for documents which always brought up the printer selection box, from which one picked the appropriate printer. Recently, almost certainly as a result of Windows updates, nothing happens when I select PRINT. The printer is working fine, and indeed a 'direct print' option to the default printer (a matrix printer) works fine, as it does not invoke the printer selection process. Does anyone know if there is a particular setting which might be causing this? It still works fine on all previous Windows versions (XP etc.), but will not work on Vista or later. Or is there a program code fix that anyone could suggest? Many thanks, Liam

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 31 (sys.dm_server_services)

    - by Tamarick Hill
    The last DMV for this month-long blog session is the sys.dm_server_services DMV. This DMV returns information about your SQL Server, Full-Text, and SQL Server Agent related services. To further illustrate the information this DMV contains, let's run it against the Training instance that we have been using for this blog series.

        SELECT * FROM sys.dm_server_services

    The first column returned by this DMV is the actual service name. The next columns are startup_type and startup_type_desc, which display the startup method you have chosen for a particular service. The next columns, status and status_desc, display the current status of each of the services on the instance. The process_id column represents the server process id. The last_startup_time column gives you the last time that a particular service was started. The service_account column provides you with the name of the account that is used to control the service. The filename column gives you the full path to the executable for the service. Lastly we have the is_clustered and cluster_nodename columns, which indicate whether or not a particular service is clustered and part of a resource cluster group, and if so, the cluster node that the service is installed on. This is a good DMV to provide you with a quick snapshot view of the current SQL Server services on your instance (a trimmed-down version of the query appears below). For more information on this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/hh204542.aspx Follow me on Twitter @PrimeTimeDBA
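
    A minimal sketch of that trimmed-down query, selecting just the columns discussed above (the column names come straight from the DMV; only the selection is mine):

        -- Quick snapshot of each service's configuration and state
        SELECT servicename,
               startup_type_desc,
               status_desc,
               process_id,
               last_startup_time,
               service_account,
               filename,
               is_clustered,
               cluster_nodename
        FROM sys.dm_server_services;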

    Read the article

  • What are the pros (and cons) of using “Sign in with Twitter/Facebook” for a new website?

    - by Paul D. Waite
    A friend and I are looking to launch a little forum site. I’m considering using the “Sign in with Facebook/Twitter” APIs, possibly exclusively (a la e.g. Lanyrd), for user login. I haven’t used either of these before, nor run a site with user logins at all. What are the pros (and cons) of these APIs? Specifically: What benefits do I get as a developer from using them? What drawbacks are there? Do end users actually like/dislike them? Have you experienced any technical/logistical issues with these APIs specifically? Here are the pros and cons I’ve got so far:
    Pros:
    - More convenient for the user (“register” with two clicks, sign in with one)
    - Possibly no need to maintain our own login system
    Cons:
    - No control over our login process
    - Excludes Facebook/Twitter users who are worried about us having some sort of access to their accounts
    - Users’ accounts on our site are compromised if their Facebook/Twitter accounts are compromised
    And if we don’t maintain our own alternative login system:
    - Dependency on Facebook/Twitter for our login system
    - Excludes non-Facebook/non-Twitter users from our site

    Read the article

  • Are project managers useful in Scrum?

    - by Martin Wickman
    There are three roles defined in Scrum: Team, Product Owner and Scrum Master. There is no project manager; instead, the project manager's job is spread across the three roles. For instance:
    - The Scrum Master: responsible for the process; removes impediments.
    - The Product Owner: manages and prioritizes the list of work to be done to maximize ROI; represents all interested parties (customers, stakeholders).
    - The Team: self-manages its work by estimating and distributing it among themselves; responsible for meeting their own commitments.
    So in Scrum, there is no longer a single person responsible for project success. There is no command-and-control structure in place. That seems to baffle a lot of people, specifically those not used to agile methods, and of course, PMs. I'm really interested in this and what your experiences are, as I think this is one of the things that can make or break a Scrum implementation. Do you agree with Scrum that a project manager is not needed? Do you think such a role is still required? Why?

    Read the article

  • Simple SQL Server 2005 Replication - "D-1" server used for heavy queries/reports

    - by Ricardo Pardini
    Hello. We have two SQL 2005 machines. One is used for production data, and the other is used for running queries/reports. Every night, the production machine dumps (backs up) its database to disk, and the other one restores it. This is called the D-1 process (a rough sketch of it appears below). I think there must be a more efficient way of doing this, since SQL 2005 has many forms of replication. Some requirements:
    1) No need for instant replication; there can be (some) delay
    2) All changes (including schemas, data, constraints, indexes) need to be replicated without manual intervention
    3) It is used for a single database only
    4) There is a third server available if needed
    5) There is high bandwidth (gigabit ethernet) available between the servers
    6) There isn't shared storage (a SAN) available
    What would be a good alternative to this daily backup/restore routine? Thanks!
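
    For context, a minimal sketch of the nightly D-1 dump/restore described above (T-SQL; the database name and paths are placeholders, not taken from the question):

        -- On the production server:
        BACKUP DATABASE ReportsDB
        TO DISK = N'\\reportserver\dumps\ReportsDB.bak'
        WITH INIT, COPY_ONLY;

        -- On the query/report server:
        RESTORE DATABASE ReportsDB
        FROM DISK = N'D:\dumps\ReportsDB.bak'
        WITH REPLACE;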

    Read the article

  • How can I create (or do I even need to create) an alias of a DNS MX record?

    - by AKWF
    I am in the process of moving my DNS records from Network Solutions to the Amazon Route 53 service. While I know and understand a little about the basic kinds of records, I am stumped on how to create the record that will point to the MX record on Network Solutions (if I'm even saying that right). On Network Solutions I have this (mail servers are listed in rank order):
    myapp.net    MX (preference 10)    TTL 7200    inbound.myapp.net.netsolmail.net.    (Network Solutions E-mail)
    I have read that the payload for an MX record states that it must point to an existing A record in the DNS. Yet in the example above, that inbound.myapp... record only has the words "Network Solutions E-mail" next to it. Our email is hosted at Network Solutions. I have already created the CNAME records that look like this:
    mail.myapp.net    7200    mail.mycarparts.net.netsolmail.net.
    smtp.myapp.net    7100    smtp.mycarparts.net.netsolmail.net.
    Since I am only using Amazon as the DNS, do I even need to do anything with that MX record? I appreciate your help; I googled and researched this before posting. This is my first post on Webmasters, although I've been on SO for a few years.
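
    For what it's worth, here is a sketch of the equivalent MX record in standard zone-file notation, built only from the values shown above (Route 53 takes the same preference and mail-server values in its console):

        ; MX record for myapp.net, mirroring the Network Solutions entry
        myapp.net.  7200  IN  MX  10  inbound.myapp.net.netsolmail.net.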

    Read the article

  • Reset Mac OS X (Snow Leopard) File Permissions -- All Files

    - by Frank
    Is there a script or process to completely reset all file system permissions to the factory defaults (short of restoring from an image backup or reinstalling the OS)? This would need to cover everything I've affected: all files from / to Applications and the home folder and all their contents. I've tried the Disk Utility First Aid 'Repair Disk Permissions', but it didn't seem to touch or affect everything - some but not all. I've run it twice so far... I've seen this, but it's not quite the same thing: Fixing mac user file permissions, not the system. The reason for all of this is that I accidentally ran a chmod on all files (as sudo). Working too fast, and now I'm in a hole.

    Read the article

  • ODI 11g - Scripting a Reverse Engineer

    - by David Allan
    A common question is how to script the reverse engineer using the ODI SDK. This follows on from some of my posts on scripting in general and accelerated model and topology setup. Check out this viewlet here to see how to define a reverse engineering process using ODI's package. Using the ODI SDK, you can script this up using the OdiPackage and StepOdiCommand classes as follows:

        OdiPackage pkg = new OdiPackage(folder, "Pkg_Rev" + modName);
        StepOdiCommand step1 = new StepOdiCommand(pkg, "step1_cmd_reset");
        step1.setCommandExpression(new Expression("OdiReverseResetTable \"-MODEL=" + mod.getModelId() + "\"",
            null, Expression.SqlGroupType.NONE));
        StepOdiCommand step2 = new StepOdiCommand(pkg, "step2_cmd_reset");
        step2.setCommandExpression(new Expression("OdiReverseGetMetaData \"-MODEL=" + mod.getModelId() + "\"",
            null, Expression.SqlGroupType.NONE));
        StepOdiCommand step3 = new StepOdiCommand(pkg, "step3_cmd_reset");
        step3.setCommandExpression(new Expression("OdiReverseSetMetaData \"-MODEL=" + mod.getModelId() + "\"",
            null, Expression.SqlGroupType.NONE));
        pkg.setFirstStep(step1);
        step1.setNextStepAfterSuccess(step2);
        step2.setNextStepAfterSuccess(step3);

    The biggest leap of faith for users is getting to know which SDK classes have to be used to build the objects in the design; using StepOdiCommand isn't necessarily obvious, but once you see it in action it is very simple to use. The above snippet uses an OdiModel variable named mod; it's a snippet I added to the accelerated model creation script in the post linked above.

    Read the article

  • Consolidate SQL Server Reporting Services

    - by Eric C. Singer
    I've been a big fan of consolidating as many DBs to a few SQL servers for a while, and I've had great success with it. However, I've never had to deal with SQL Reporting Services. Has anyone migrated SSRS from a bunch of random SQL servers into a consolidated SQL server? I don't exactly know a whole lot about SSRS, which is part of the problem. To my knowledge, it's one DB per SSRS instance, so it sounds like I'd need to find a way of exporting data and merging it. Basically the process used to look like:
    1. Move DB from SQL Express to shared SQL server
    2. Change pointer in APP to point at new SQL server
    With Reporting Services, how do I move the reporting services component of the DB as well? I realize I may need to tweak the app, but my question is on the SQL side.

    Read the article

  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with the Windows 7 operating system. The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters, and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education during the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration, as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly, much to the displeasure of the audience. They couldn't possibly have been less pleased than me. At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server and I could function reasonably well with this approach. Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements. My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual-boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time, and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't work. Somewhere along the way, I learned that it was possible to load SharePoint 2010 on Windows 7, which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to a Hyper-V VHD yet, but that's on my list of things to try, so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and SharePoint Server 2010 and decided to follow suit on my Windows 7 Ultimate 64-bit laptop.
    The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way. DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help. I won't provide the step-by-step instructions in this post as there are other resources that provide these details, but I will provide an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal. Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn't perform the steps in Part 1 because those steps are applicable only to a server operating system, which I am not running on my laptop. Then, the instructions in Part 2 won't work exactly as written, for the same reason. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info:
    - You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client (the one-line edit is sketched after this paragraph).
    - You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install and provide a command-line instruction to execute which enables required Windows features.
    I will digress for a moment to save you some grief in the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error: Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP, but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.) You don't need to download the add-in anymore if you're doing a regular server-based installation of SharePoint, because it installs as part of the prerequisites automatically.
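
    For reference, the config.xml edit mentioned in the first bullet above is a single setting. As I recall from the MSDN article, it is the line below; verify against the current documentation before relying on it:

        <!-- Added inside the <Configuration> element of config.xml -->
        <Setting Id="AllowWindowsClientInstall" Value="True"/>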
    When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions:
    - On the Choose the installation you want page of the installation wizard, I chose Server Farm.
    - On the Server Type page, I chose Complete.
    - At the end of the installation, I did not run the configuration wizard.
    Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3, which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end. It wasn't until much later, as I was investigating an error, that I encountered Dave Wickert's post explaining that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint, but I have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I do have it installed. Continue reading to see how I accomplished my objective. I uninstalled SQL Server 2008 R2 and started again. I had different problems which I don't recollect now. However, I uninstalled again, approached installation from a different angle, and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed:
    1. Install SQL Server 2008 R2 to get a database engine instance installed.
    2. Run the SharePoint configuration wizard to set up the SharePoint databases.
    3. In Central Administration, create a Web application using classic mode authentication, as per a TechNet article on PowerPivot Authentication and Authorization.
    4. Follow the steps at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note - you must launch setup by using Run as administrator.
    I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment. I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application - after you have successfully created the target application - you select the target application and then click Set credentials in the ribbon. Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server. However, I have yet to test this in my current environment. I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information.
    I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community who will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.

    Read the article

  • Unable to connect to wireless internet (wifi) through KDE plasma desktop

    - by Mohammed Arafat Kamaal
    I installed the KDE plasma desktop through the Ubuntu Software Center. I am on Ubuntu Lucid Lynx. After the install, I'm unable to connect to my wifi connection in the KDE session, but I can connect to my wifi perfectly through a GNOME session. I've tried a lot without much success. Also, KDE doesn't store my password correctly and keeps prompting for authorization again and again. Some of the things that I noticed: the network is detected, the network name and strength are displayed, and other characteristics also appear properly. When the credentials are supplied, it accepts them and continually displays the message "Setting network address". However, this process never succeeds, and at this stage the password is repeatedly asked for many times, but the connection is never established. I have also tried other things, like restarting my modem and the computer. That didn't work. I tried to restart nm-applet and KNetworkManager. That didn't work either. ifconfig displays all my interfaces and MAC addresses correctly. Since it's working fine in GNOME, the drivers are fine. This is surely a KDE-specific issue. Other threads related to this on the interwebs don't offer much information either. Please share a solution for this.

    Read the article

  • MySQL Database Replication and Server Load

    - by Willy
    Hi everyone, I have an online service with around 5000 MySQL databases. Now I am interested in building a development area in my office with exactly the same environment, so I am about to set up MySQL replication between my live MySQL server and the development MySQL server. But my concern is the load which will occur on my live MySQL server once replication is started. Do you have any experience with this? Will this process cause extra load on my production server? Thanks, have a nice weekend.
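
    For reference, basic master/slave replication of this sort needs only a handful of my.cnf settings; a minimal sketch with hypothetical server IDs (the binary log on the master is also the main source of the extra write load being asked about):

        # Live (master) server my.cnf
        [mysqld]
        server-id = 1
        log-bin   = mysql-bin

        # Development (slave) server my.cnf
        [mysqld]
        server-id = 2
        read-only = 1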

    Read the article

  • IT Optimization Plan Pays Off For UK Retailer

    - by Brian Dayton
    I caught this article in ComputerworldUK yesterday. The headline talks about how UK-based supermarket chain Morrisons is increasing its IT spend... OK, sounds good. Even nicer that Oracle is a big part of that. But what caught my eye were three things:
    1) Morrisons truly has a long-term strategy for IT - in this case, modernizing and optimizing how they use IT for business advantage.
    2) Even in a tough economic climate, Morrisons views IT investments as contributing to and improving the bottom line. Specifically, "The investment in IT contributed to a 21 percent increase in Morrison's underlying profit.."
    3) The phased, 3-year "Optimization Plan" took a holistic approach to their business - from CRM and Supply Chain systems to the underlying application infrastructure. On the infrastructure front, adopting a more flexible Service-Oriented Architecture enabled them to be more agile and adapt their business, and Identity Management helped with sometimes mundane (but costly) issues like lost passwords and being able to document who has access to what.
    Things don't always turn out so rosy. And I know it was a long and difficult process... but it's nice to see a happy ending every once in a while.

    Read the article

  • XML Schema For MBSA Reports

    - by Steve Hawkins
    I'm in the process of creating a script to run the command-line version of Microsoft Baseline Security Analyzer (mbsacli.exe) against all of our servers. Since the MBSA reports are provided as XML documents, I should be able to write a script or small program to parse the XML looking for errors / issues. I'm wondering if anyone knows whether or not the XML schema for the MBSA reports is documented anywhere - I have googled this and can't seem to find any trace of it. I've run across a few articles that address bits and pieces, but nothing that addresses the complete schema. Yes, I could just reverse engineer the XML, but I would like to understand a little more about the meaning of some of the tags. Thanks...
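
    Absent a published schema, a reverse-engineering starting point might look like the sketch below (Python). Note the element and attribute names used here (SecScan, Check, Name, Grade, Advice) are guesses based on inspecting sample reports, not a documented contract, so verify them against your own mbsacli.exe output:

        # Sketch: list the checks in an MBSA XML report along with their grades.
        # All element/attribute names below are assumptions, not documented.
        import sys
        import xml.etree.ElementTree as ET

        def checks(path):
            root = ET.parse(path).getroot()  # expected root element: SecScan
            for check in root.iter("Check"):
                advice = (check.findtext("Advice") or "").strip()
                yield check.get("Name"), check.get("Grade"), advice

        if __name__ == "__main__":
            for name, grade, advice in checks(sys.argv[1]):
                print(f"{name} (grade {grade}): {advice}")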

    Read the article

  • Installing Win8 from ISO image error

    - by eco_bach
    I installed a new SSD in my PC laptop, which came with an OEM version of Windows 8. After trying unsuccessfully to get the OEM version of Windows onto the SSD drive, I gave up and purchased and downloaded a new copy of Windows 8. After that, I created an ISO image on my desktop machine and followed the instructions here to create a bootable USB flash drive. However, after going through the boot install process with the USB flash drive on my laptop, I get the following message: 'The product key entered does not match any of the Windows images available for installation. Enter a different product key.' But I wasn't asked to enter any key! Is this because the ISO image was created on a different machine? I really don't get why this should be so difficult.

    Read the article

  • Do Repeat Yourself in Unit Tests

    - by João Angelo
    Don’t get me wrong, I’m a big supporter of the DRY (Don’t Repeat Yourself) principle - except, however, when it comes to unit tests. Why? Well, in my opinion a unit test should be a self-contained group of actions with the intent to test a very specific piece of code, and it should not depend on externals shared with other unit tests. In a typical unit test we can divide its code into two major groups:
    1. Preparation of preconditions for the code under test;
    2. Invocation of the code under test.
    It’s in the first group that you are tempted to refactor common code in several unit tests into helper methods that can then be called in each one of them. Another way to not duplicate code is to use the built-in infrastructure of some unit test frameworks, such as SetUp/TearDown methods that automatically run before and after each unit test. I must admit that in the past I was guilty of both charges, but what at first seemed a good idea, since I was removing code duplication, turned out to offer no added value and even complicated the process when a given test fails. We love unit tests because of their rapid feedback when something goes wrong. However, this feedback most of the time requires reading the code for the failed test. Given this, what do you prefer? To read a single method, or to wander through several methods like SetUp/TearDown and private common methods (a small sketch of the self-contained style follows below)? I say it again: do repeat yourself in unit tests. It may feel wrong at first, but I bet you won’t regret it later.
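
    To illustrate the idea, a minimal sketch in Python with invented Cart/Item names (the original post is framework-agnostic): each test states its own preconditions inline instead of sharing a SetUp method, so a failing test reads top to bottom in one place.

        # Each test repeats its own arrange step on purpose - no shared fixture.
        class Item:
            def __init__(self, price):
                self.price = price

        class Cart:
            def __init__(self):
                self.items = []

            def add(self, item):
                self.items.append(item)

            def total(self):
                return sum(i.price for i in self.items)

        def test_total_of_single_item():
            cart = Cart()            # precondition stated right here...
            cart.add(Item(price=10))
            assert cart.total() == 10

        def test_total_of_two_items():
            cart = Cart()            # ...and deliberately repeated here
            cart.add(Item(price=10))
            cart.add(Item(price=5))
            assert cart.total() == 15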

    Read the article

  • Errors reported by "powercfg -energy"

    - by Tim
    Running "powercfg -energy" under Windows 7 command line, I received a report with following three errors: System Availability Requests:Away Mode Request The program has made a request to enable Away Mode. Requesting Process \Device\HarddiskVolume2\Program Files\Windows Media Player\wmpnetwk.exe CPU Utilization:Processor utilization is high The average processor utilization during the trace was high. The system will consume less power when the average processor utilization is very low. Review processor utilization for individual processes to determine which applications and services contribute the most to total processor utilization. Average Utilization (%) 49.25 Platform Power Management Capabilities:PCI Express Active-State Power Management (ASPM) Disabled PCI Express Active-State Power Management (ASPM) has been disabled due to a known incompatibility with the hardware in this computer. I was wondering for the first error, what does "enable away mode" mean? for the second, what utilization percentage of CPU is reasonable? for the third, what is "PCI Express Active-State Power Management (ASPM)"? How I can correct the three errors? Thanks and regards!

    Read the article

  • Fix Icon Display Problems by Rebuilding the Windows 7 Thumbnail Cache

    - by The Geek
    Have you ever been browsing through photos or videos on your PC and noticed that the thumbnails weren’t showing up properly? Sometimes they get corrupted, and you can quickly rebuild them to fix the problem. Just for some background, let’s walk through what we’re talking about. Normally, when you’re browsing around your files, you’ll see thumbnails for the pictures and videos that you are viewing. These thumbnails are all generated and stored in a cache to make browsing files faster. But sometimes the cache gets corrupted, and we need to rebuild it. Rebuilding the Windows 7 thumbnail cache is simple: all you have to do is open up Disk Cleanup through the Start menu search box (just type in disk cleanup to find it), make sure that Thumbnails is checked, and then click OK to run through the cleaning process.

    Read the article

  • How to learn & introduce Scrum in a small startup?

    - by Jens Bannmann
    In a few months, a friend will establish his startup software company, and I will be the software architect with one additional developer. Though we have no real day-to-day experience with agile methods, I have read much "overview" type material on them, and I firmly believe they are a good - if not the only - way to build software. So with this company, I want to go for iterative, agile development from day 1, preferably something lightweight. I was thinking of Scrum, but the question is: what is the best way for me and my colleagues to learn about it, to introduce it (which techniques, when, etc.), and to evaluate whether we should keep it? Background which might be relevant: we're all experienced developers around the same age with a similar professional mindset. We have worked together in the past and afterwards at several different companies, mostly with a Java/.NET focus. Some are a bit familiar with general ideas from the agile movement. In this startup, I have great power over tools, methods and process. The startup's product will be developed from scratch and could be classified as middleware. We have some "customer" contacts in the industry who could provide input as soon as we get to an alpha stage.

    Read the article

  • Handling range in CNAME

    - by Imran
    We have different sets of CNAMEs pointing to different subdomains. These subdomains (a.domain.com, b.domain.com) point to different IPs on different machines.
    # Server A
    a1.domain.com pointing to a.domain.com
    a2.domain.com pointing to a.domain.com
    ..
    aN.domain.com pointing to a.domain.com
    # Server B
    b1.domain.com pointing to b.domain.com
    b2.domain.com pointing to b.domain.com
    ..
    bN.domain.com pointing to b.domain.com
    Currently, we have to add individual CNAME entries (e.g. a1... aN) against a single subdomain (a.domain.com). We repeat the above process for every new server, which is actually another subdomain (e.g. c.domain.com). Is there a way we can specify a range of CNAMEs (e.g. [a1..a25].domain.com pointing to a.domain.com) instead of adding separate CNAME entries? Is there any possibility of handling this at the DNS or web server (Apache or Nginx) level?
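
    Standard zone files have no range syntax for CNAMEs (BIND's $GENERATE directive is the closest native equivalent, but it is BIND-specific and not portable to hosted DNS), so a common workaround is to generate the entries with a throwaway script. A sketch in Python, using the names from the question:

        # Emit the repeated CNAME entries as standard zone-file lines.
        # Adapt the output for whichever DNS service actually hosts the zone.
        def cname_range(prefix, start, end, target, ttl=7200):
            for n in range(start, end + 1):
                yield f"{prefix}{n}.domain.com. {ttl} IN CNAME {target}"

        if __name__ == "__main__":
            for line in cname_range("a", 1, 25, "a.domain.com."):
                print(line)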

    Read the article

  • IBM Tivoli Network Manager IP Edition - Job does not run

    - by Thorsten Niehues
    Since our network discovery takes too long, I tried to split the biggest job into two parts. The two parts use the same Perl script but have a different scope. I copied a job (agent) by doing the following:
    - Copied the .agnt file
    - Copied the associated Perl script
    The problem is that one or the other job (it changes randomly) does not run, and the Disco process will eventually fail. In the log of the job which does not run, I see the following error message:
    Wed Jul 18 08:48:54 2012 Warning: Failed to send on transport layer found in file CRivObjSockClient.cc at line 1293 - Client My_MacTable_Cis is not connected to service Helper
    How do I fix this problem?

    Read the article

  • xcopy file, suppress “Does xxx specify a file name…” message

    - by MarkPearl
    Today we had an interesting problem with file copying. We wanted to use xcopy to copy a file from one location to another and rename the copied file, but do this impersonating another user. Getting the impersonation to work was fairly simple; however, we then had the challenge of getting xcopy to work. The problem was that xcopy kept prompting us with a prompt similar to the following:
    Does file.xxx specify a file name or directory name on the target (F = file, D = directory)?
    At which point we needed to press ‘Y’. This seems to be a fairly common challenge with xcopy, as illustrated by the following stack overflow link… One of the solutions was to do the following:
        echo f | xcopy /f /y srcfile destfile
    This is fine if you are running from the command prompt, but if you are triggering this from C#, how could we daisy-chain a bunch of commands? The solution was fairly simple; we eventually ended up with the following method:

        public void Copy(string initialFile, string targetFile)
        {
            string xcopyExe = @"C:\windows\system32\xcopy.exe";
            string cmdExe = @"C:\windows\system32\cmd.exe";
            ProcessStartInfo p = new ProcessStartInfo();
            p.FileName = cmdExe;
            p.Arguments = string.Format(@"/c echo f | {2} {0} {1} /Y", initialFile, targetFile, xcopyExe);
            Process.Start(p);
        }

    We wrapped the commands we wanted to chain as arguments, and instead of calling xcopy directly, we called cmd.exe, passing xcopy as an argument.

    Read the article

  • Upcoming Webcast: Use Visual Decision Making To Boost the Pace of Product Innovation – October 24, 2013

    - by Gerald Fauteux
    See More, Do More: Use Visual Decision Making To Boost the Pace of Product Innovation. Join a free webcast hosted by Oracle, featuring QUALCOMM. Click here to register for this webcast.
    Keeping innovation ahead of shrinking product lifecycles continues to be a challenge in today’s fast-paced business environment, but new visualization techniques in the product design and development process are helping businesses widen the gap further. Innovative visualization methods, including Augmented Business Visualization, can be powerful differentiators for business leaders, especially when it comes to accelerating product cycles. Don’t miss this opportunity to discover how visualization tied to PLM can help empower visual decision making and enhance productivity across your organization. See more and do more with the power of Oracle.
    Join solution experts from Oracle and special guest Ravi Sankaran, Sr. Staff Systems Analyst, QUALCOMM, to discuss how visual decision making can help efficiently ramp innovation efforts throughout the product lifecycle:
    - Advance collaboration with universal access across all document types, with robust security measures in place
    - Synthesize product information quickly (cost, quality, compliance, etc.) in a highly visual form from multiple sources, in a single visual and actionable environment
    - Increase productivity by rendering documents in the appropriate context of specific business processes
    - Drive modern business transformation with new collaboration methods such as Augmented Business Visualization
    Date: Thursday, October 24, 2013
    Time: 10:00 a.m. PDT / 1:00 p.m. EDT
    Click here to register for this FREE event

    Read the article

  • Problems starting MySQL on Mac OS X

    - by Jon
    I am not able to start the MySQL server on Mac OS X 10.4.11. MySQL was installed using MacPorts and was running fine until it suddenly died without any obvious reason. When running "mysql", I get the error message:
    ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/opt/local/var/run/mysql5/mysqld.sock' (2)
    If I try to start MySQL manually, I get the following error message:
    sudo /opt/local/share/mysql5/mysql/mysql.server start
    Starting MySQL/opt/local/share/mysql5/mysql/mysql.server: line 159: kill: (636) - No such process ERROR!
    In /etc/mysql/my.cnf I have:
    socket = __PREFIX/var/run/mysqld/mysqld.sock
    But the path "/opt/local/var/run/mysqld/" does not exist on my system. I tried to change the socket path to "__PREFIX/var/run/mysql5/mysqld.sock" (which is where the socket is located). Unfortunately, this did not help either. Owner and permissions for /opt/local/var/run/mysql5/ are correctly set. Any suggestions on how to start MySQL again? Thanks for your advice.
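
    One possible fix, assuming the __PREFIX token is simply not being expanded (an assumption based on the paths in the question, since the socket really lives under the MacPorts prefix /opt/local): spell the path out literally for both the server and the client in my.cnf.

        # /etc/mysql/my.cnf - hypothetical sketch, paths taken from the question
        [mysqld]
        socket = /opt/local/var/run/mysql5/mysqld.sock

        [client]
        socket = /opt/local/var/run/mysql5/mysqld.sock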

    Read the article
