Search Results

Search found 25727 results on 1030 pages for 'solution'.


  • Boss solution vs Developer solution

    - by mahen23
    The problem: when we were sending newsletters to customers, there was no way to confirm whether a customer had already received the mail. So the boss decided to implement this idea:

    Boss's idea: each time mail is sent, do an INSERT into a table with the title of the newsletter and the email address receiving it. To ensure that no address receives the same newsletter twice, first do a SELECT on the table and look for the title of the newsletter being sent:

        if (title of newsletter is found) {
            // check whether the address we are sending to is already present;
            // if it is, do not send mail
        } else {
            // send mail
        }

    My idea: create a column called unique and mark it as UNIQUE. Each time mail is sent, concatenate email + newsletter id and INSERT it into that column. Then check "mysql_affected_rows" to see whether our INSERT succeeded: if it did, we send the mail; otherwise the row is a duplicate and there is no need to send it.
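    A minimal PHP sketch of the developer's approach, assuming a hypothetical MySQL table sent_log whose dedupe_key column carries a UNIQUE index (all names here are made up for illustration):

        <?php
        // Sketch of the UNIQUE-key approach; table and column names are hypothetical.
        // Assumed schema:
        //   CREATE TABLE sent_log (dedupe_key VARCHAR(255) NOT NULL, UNIQUE KEY (dedupe_key));
        $db = new mysqli('localhost', 'user', 'pass', 'newsletters');

        function sendOnce(mysqli $db, $email, $newsletterId)
        {
            // INSERT IGNORE silently skips rows that would violate the UNIQUE key.
            $stmt = $db->prepare('INSERT IGNORE INTO sent_log (dedupe_key) VALUES (?)');
            $key  = $email . '|' . $newsletterId;
            $stmt->bind_param('s', $key);
            $stmt->execute();

            // affected_rows is 1 for a fresh insert, 0 when the key already exists.
            if ($stmt->affected_rows === 1) {
                // ... send the mail here ...
                return true;
            }
            return false; // duplicate: this address already received this newsletter
        }
        ?>

    One UNIQUE index and one INSERT replace the SELECT-then-check round trip, and the database enforces the constraint even when several senders run concurrently.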

    Read the article

  • Ghost Solution Suite: Booting over PXE to WinPE for re-imaging, then booting to installed OS

    - by uberdanzik
    I have 40 networked computers that need to be re-imaged each night over the network via an automatic, unattended process. This resets the computers to a default state and updates them to the latest software loads. I'm using Symantec Ghost Solution Suite 2.5. My process so far is the following:

    1. The client begins in a powered-down, Wake-on-LAN-accepting state.
    2. A Ghost Console task uses Wake-on-LAN and PXE to boot the client into the WinPE environment.
    3. The client connects to the Ghost Console and re-images itself successfully.
    4. The client closes WinPE and restarts.

    PROBLEM: the client boots into the WinPE environment again instead of the newly installed OS (Windows 7). I need it to boot into Windows 7 once so that I can run a few configuration batch files, then shut down into the Wake-on-LAN state again. The Ghost Console even reports an error on the process, saying the client never rebooted into the OS. Right now there seems to be no option to stop it from booting into the PXE server's WinPE image after re-imaging. Even setting up a PXE boot menu with other boot options is pointless, because it will always boot the default option. I would expect the Ghost Console task to be able to influence the PXE boot choice somehow. What do they expect us to do, turn the PXE server on and off manually? How can I get the client to boot to the OS after re-imaging? Thank you.

    Read the article

  • SQL SERVER – Solution of Puzzle – Swap Value of Column Without Case Statement

    - by pinaldave
    Earlier this week I asked how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself and asked the community to come up with alternatives, and honestly I am stunned and amazed by the quality of the entries. I will not be able to cover every single solution posted as a comment, but I would like to cover a few interesting ones. I am selecting 5 solutions that are different (not necessarily the most optimal or best – just different and interesting). For clarity, here is the original problem statement:

        USE tempdb
        GO
        CREATE TABLE SimpleTable (ID INT, Gender VARCHAR(10))
        GO
        INSERT INTO SimpleTable (ID, Gender)
        SELECT 1, 'female'
        UNION ALL
        SELECT 2, 'male'
        UNION ALL
        SELECT 3, 'male'
        GO
        SELECT * FROM SimpleTable
        GO
        -- Insert your solutions here
        -- Swap value of column Gender
        SELECT * FROM SimpleTable
        GO
        DROP TABLE SimpleTable
        GO

    Here are the five most interesting and different solutions I have received.

    Solution by Roji P Thomas

        UPDATE S
        SET S.Gender = D.Gender
        FROM SimpleTable S
        INNER JOIN SimpleTable D ON S.Gender != D.Gender

    I really loved this solution, as it is very simple and drives the point home – elegant, and it will work for pretty much any pair of values (not necessarily restricted to the 'male' and 'female' of the original question).

    Solution by Aneel

        CREATE TABLE #temp (id INT, datacolumn CHAR(4))
        INSERT INTO #temp VALUES (1,'gent'), (2,'lady'), (3,'lady')
        DECLARE @value1 CHAR(4), @value2 CHAR(4)
        SET @value1 = 'lady'
        SET @value2 = 'gent'
        UPDATE #temp SET datacolumn = REPLACE(@value1 + @value2, datacolumn, '')

    Aneel has a very interesting solution where he concatenates both values and REPLACEs away the original one. I personally like the creativity of this solution.

    Solution by SIJIN KUMAR V P

        UPDATE SimpleTable
        SET Gender = RIGHT(('fe' + Gender), DIFFERENCE((Gender), SOUNDEX(Gender)) * 2)

    Sijin amazed me with the DIFFERENCE and SOUNDEX functions. I had never imagined that those two functions could solve this problem. Hats off to you, Sijin.

    Solution by Nikhildas

        UPDATE St
        SET St.Gender = t.Gender
        FROM SimpleTable St
        CROSS APPLY (SELECT DISTINCT Gender FROM SimpleTable WHERE St.Gender != Gender) t

    I was expecting someone to come up with a solution using CROSS APPLY, and this is indeed a neat and interesting exercise. If you do not know how CROSS APPLY works, this is the time to learn.

    Solution by mistermagooo

        UPDATE SimpleTable
        SET Gender = X.NewGender
        FROM (VALUES ('male','female'), ('female','male')) AS X (OldGender, NewGender)
        WHERE SimpleTable.Gender = X.OldGender

    According to the author this is a slow solution, but I love how the syntax is laid out here. I will say this is the most beautifully written solution (not necessarily the best).

    Bonus: Solution by Madhivanan

    Somehow I was confident that Madhi – a SQL Server MVP – would come up with something I would be compelled to read. He has written a complete blog post on this subject, and I encourage all of you to go ahead and read it.

    Personally I wanted to list every single comment here. Some are so good that I am just amazed by the creativity. I will write a follow-up to this blog post in the future. However, here is the challenge for you.

    Challenge: go over the 50+ solutions posted to the original problem. Here are my two asks for you:

    1) Pick your best solution and list it here in a comment. This exercise will surely teach us a thing or two.

    2) Write your own solution that is not yet covered by the 50 already listed. I am confident that there is no end to creativity.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to add an "Existing solution folder recursively" to my VS2005 solution

    - by user36753
    I tried drag and drop from Explorer, but had no luck, getting the following error: "Folders cannot be dropped or pasted as solution items. Choose an individual document instead." I know we can create each folder/subfolder manually and add each file, but is there any quick way to do this in Visual Studio 2005? Updated: thank you for the reply, but I do not want the folders to be added under any project. They should appear as a separate node inside my solution, like any other project. In this case "Show All Files" does not work, since the solution itself does not have any folders; it only works if we select a project. I know we can create each folder/subfolder manually and add each file, but I am after a quick way, because there are a few hundred files.

    Read the article

  • VSS - Solution file between multiple users

    - by BhejaFry
    Hi folks, we have a solution with multiple projects that is being developed by a team of developers. The project paths in the initially checked-in solution file contain paths specific to one developer. Now when another developer gets the latest version of the solution, some of the projects won't load because the paths differ. What's a better way to manage this? TIA

    Read the article

  • Shared storage solution for our sql server backups

    - by Gokhan
    We have 3 clustered SQL Servers. We have 5+ multi-terabyte databases, and their backup files (compressed using Quest LiteSpeed) are hitting over 600 GB each. We are required to keep at least one or two weeks of weekly full backups, six days of differential backups, and one or two weeks' worth of log backups locally. We are currently limited to 2 TB volumes from our SAN team; we can have multiple volumes, but they are expensive ($200 per raw TB per month), and dealing with many backup volumes instead of a single big volume is difficult. I think it would be good if we could have shared network storage of 20 TB+ (RAID 10 or so) for all our servers to keep the backups on; another department would then copy them to tape from the network storage and delete files according to the retention period. Ideally this box would come with a built-in operating system (even Unix – a complete file storage system). What do you think – does this make sense, and is there any manufacturer that sells a storage product like this that works in a clustered environment? Thank you

    Read the article

  • Web Site in solution where "Rebuild Solution" compile succeeds cannot launch debugger

    - by fordareh
    I have a solution that includes a Web Site (created using the Web Site template, not the Web Application Project template – converting isn't an option, by the way). When I rebuild all, the compile succeeds, but strangely displays 3 errors, all of which are "Could not get dependencies for project reference 'PROJNAME'". When I try to launch the debugger, I get the "There were build errors." dialog. Two questions: 1) If I choose the 'Yes' option in the debug error dialog to run the last successful build, will it run the code that my Rebuild All just compiled? 2) How do I resolve this issue? I checked the post below and am disheartened by my prospects. What is strange, though, is that I added these same projects to a separate web site solution that compiled and debugged fine, then removed the test web site and re-added the target web site I would like to debug, and it failed in the same manner. Is there a secret web site .proj file for .NET web sites? http://stackoverflow.com/questions/863379/could-not-get-dependencies-for-project-reference

    Read the article

  • Is any solution the correct solution?

    - by Eli
    I always think to myself after solving a programming challenge that has tied me up for some time, "It works, that's good enough". I don't think this is really the correct mindset; in my opinion I should always be trying to write the best-performing code I can. Anyway, with that said, I just tried a Project Euler question, specifically question #2. How could I have improved this solution? I feel like it's really verbose, like passing the previous number through the recursion is clumsy.

        <?php
        /*
        Each new term in the Fibonacci sequence is generated by adding the
        previous two terms. By starting with 1 and 2, the first 10 terms will be:
        1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
        Find the sum of all the even-valued terms in the sequence which do not
        exceed four million.
        */
        $answer = 0; // initialize so the global is defined before first use

        function fibonacci($number, $previous = 1)
        {
            global $answer;
            $fibonacci = $number + $previous;
            if ($fibonacci > 4000000) {
                return;
            }
            if ($fibonacci % 2 == 0) {
                $answer += $fibonacci;
            }
            return fibonacci($fibonacci, $number);
        }

        fibonacci(1);
        echo $answer;
        ?>

    Note this isn't homework. I left school hundreds of years ago. I am just feeling bored and going through the Project Euler questions.
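    For comparison, here is a minimal iterative sketch of the same computation – no recursion, no globals – which is one common way to tighten this up (a sketch, not the only possible improvement):

        <?php
        // Sum the even-valued Fibonacci terms that do not exceed four million.
        $previous = 1;
        $current  = 2;
        $sum      = 0;

        while ($current <= 4000000) {
            if ($current % 2 == 0) {
                $sum += $current;
            }
            // advance the pair one step along the sequence
            $next     = $previous + $current;
            $previous = $current;
            $current  = $next;
        }

        echo $sum; // 4613732
        ?>

    Keeping the running pair in two locals removes both the global state and the recursion depth, while leaving the algorithm itself unchanged.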

    Read the article

  • MySQL INJECTION Solution...

    - by Val
    I have been bothered for so long by MySQL injections and was thinking of a way to eliminate the problem altogether. I have come up with something below and hope that many people will find it useful. The only drawback I can think of is partial search: "Jo" returns "John" by using the LIKE %% statement. Here is a PHP solution:

        <?php
        function safeQ($v) {
            $search  = array('delete', 'select'); // ...and every other keyword
            $replace = array(base64_encode('delete'), base64_encode('select'));
            return str_replace($search, $replace, $v);
        }

        function html($str) {
            $search  = array(base64_encode('delete'), base64_encode('select'));
            $replace = array('delete', 'select'); // ...and every other keyword
            return str_replace($search, $replace, $str);
        }

        // example 1 ($query is a mysql_query() result)
        $result = mysql_fetch_array($query);
        echo html($result['field_name']);

        // example 2
        $select = "SELECT * FROM `articles` WHERE title = '" . safeQ($_GET['query']) . "'";

        // example 3
        $insert = "INSERT INTO `articles` (title) VALUES ('" . safeQ($_GET['query']) . "')";
        ?>

    I know, I know, you could still inject using 1=1 or other types of injection, but I think this could solve half of the problem so that the right MySQL query is executed. So my question is: can anyone find any drawbacks to this? If so, please feel free to comment here. PLEASE GIVE AN ANSWER only if you think this is a very useful solution with no major drawbacks, OR you think it is a bad idea altogether...
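    For reference, the alternative usually recommended for this problem is parameterized queries, where user input never becomes part of the SQL text at all. A minimal sketch with PDO (connection details hypothetical):

        <?php
        // Parameterized query: keywords like SELECT or DELETE in the input are just data.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', array(
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ));

        $stmt = $pdo->prepare('SELECT * FROM articles WHERE title LIKE ?');
        $stmt->execute(array('%' . $_GET['query'] . '%')); // partial search still works
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
        ?>

    This also keeps the partial-search behaviour ("Jo" matching "John") without any keyword encoding or decoding step.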

    Read the article

  • Solution Factory for Visual Studio 2010

    - by Mendy
    I love the idea behind the Solution Factory project, but unfortunately this project has a few bugs. Is anyone using it successfully with Visual Studio 2010? Is there any better option for the same task (creating a new project based on an existing one)?

    Read the article

  • Is there a suggested solution structure for ASP.NET MVC Production Apps

    - by Eoin Campbell
    In general, I don't like to keep code (base classes or data access code) in the App_Code directory of an ASP.NET site. I'll usually pull this stuff out into MySite.BusinessLogic and MySite.DataAccess DLLs respectively. I'm wondering whether I should be doing the same for ASP.NET MVC. Would it be better to organise the solution something along these lines?

    - MySite.Common - DLL - (basic functionality built on .NET System DLLs)
    - MySite.DAL - DLL - (data access layer & DBML files)
    - MySite.Models - DLL - (MVC models, e.g. repository classes)
    - MySite.Controllers - DLL - (MVC controllers which use the models)
    - MySite - ASP.NET MVC site

    Or am I missing something... presumably, I'll lose some of the nice "Add View" and "Go To Controller" context menu items that have been added.

    Read the article

  • New Visual Studio 2010 Extension - Collapse Solution

    - by MikeParks
    If your team has recently upgraded to Visual Studio 2010, take a second to check out the new Extension Manager. You can use it to browse through or install tons of tools, controls, or templates from the Visual Studio Gallery. My friend, Cory Cissell, and I recently teamed up and created an extension of our own called "Collapse Solution". It adds a "Collapse Solution" option to the context menu of the solution node in Solution Explorer, and a "Collapse Project" option to the context menu of each project node. When that option is clicked, it walks the Solution Explorer tree and collapses any expanded child nodes in that section (projects, folders, code-behind files, designer files, etc.). I used to have an add-in that did this in Visual Studio 2008, but it wasn't compatible when we upgraded to 2010, so we decided to write our own. The old tool was also packaged with a bunch of other junk we didn't need, so we figured it would be a much cleaner tool broken off into its own extension. There's no need to install extra tools if you don't really need them. So if you have upgraded to Visual Studio 2010, please feel free to try out our Collapse Solution extension and leave us a rating/review in the Visual Studio Gallery. Thanks! Here's the link: http://visualstudiogallery.msdn.microsoft.com/en-us/2d81fec6-71f3-4fa5-87b4-c2aa18e42f92

    Read the article

  • Temporary "Backup" of SharePoint Content During Feature and Solution Deployment

    - by ccomet
    I need to decide on a method for storing a subset of the content in a SharePoint site, so that when I delete and recreate certain lists as part of a feature activation, I can re-insert all of this content back where it belongs. I have an idea myself, but I don't know if it's the only method, or more importantly, the right method.

    My client has me creating a SharePoint system for them to communicate with their clients. The business process has maybe 5 stages in it (maybe more; I don't even know, because they don't tell me everything), and the system I've written over the past months covers maybe 2 of those stages. This meets our deadline of completing those stages by Monday next week... but at that point my client plans to take the site live. In effect, their work with their clients will run parallel to my work for them. As I complete each following stage of the process on a separate test server, I'll push it onto the live server. Scheduled downtimes during non-business hours (like a weekend) will be available for these pushes. Keeping pace so that my development outruns the actual business process is my own problem and off-topic... so let's get back to the problem I stated at the start of this post.

    In this system, we have sets of features which create lists for their associated content types and field types when activated, and delete these lists when deactivated. Most updates don't require deactivating and reactivating these features – workflow changes, custom actions, custom forms, and similar ilk. But some parts do. On my test server it's okay for me to obliterate lists, but once the site is live and there's real correspondence data, that is absolutely unacceptable. So when I need to implement a new change in functionality, I need to be able to store the data currently held in several lists, deactivate the feature, reactivate the feature, and restore all of this data. Perhaps I have hoisted myself by my own petard with the feature system I implemented; the need to later create several of these "project sites" meant a lot of my code was written with "can be deployed repeatedly" in mind.

    My current plan is to run through the lists and libraries affected by the particular feature being reset. Files and all of their versions will be saved in a directory on the server. Then a set of text files will store all of the important field values for the items. This includes a lot of cross-list reference lookups that will need to be maintained, but that's simple enough. Then I deactivate the feature, deploy the new solution, and reactivate the feature. We upload all of the files in the order specified by their versions and update them with the stored fields for those versions, so that we retain the version structure. As each file is first uploaded, the new ID is picked out, and all relevant lookups in the rest of the files are updated (in some manner that ensures I don't re-update them later with an incorrect value, of course). After that, we run through the rest of the items in whatever order best keeps the relational data correct. That roughly summarizes my current plan.

    To my advantage, there are no long-running workflows in the system that would be affected, so I won't have to worry about anything "still running" when I do this. I don't really know all the cons of this approach... I imagine they're quite hefty. But I'm unsure what other choices I even have, and my searches haven't turned up anything. Can anyone think of a better idea? Or will everyone just tell me that I really have no other choice? Thanks in advance!

    Read the article

  • BonitaSoft releases Bonita Open Solution 5.6 and offers a free trial of its open-source business process management solution

    BonitaSoft releases Bonita Open Solution 5.6 and offers a free trial of its open-source business process management solution. BonitaSoft, one of the leaders in open-source business process management (BPM), has announced the release of Bonita Open Solution 5.6. This new version integrates major upgrades, offered within a new range of BPM solutions, to "maximize productivity, accelerate putting process-based applications into production, and secure critical deployments". The Bonita Open Solution suite is designed to meet the evolving needs of BPM projects that require more collaboration between the users...

    Read the article

  • Problem Solving vs. Solution Finding

    - by ryanabr
    By and large, most developers fall into these two camps. I will try to explain what I mean by way of example. A manager gives a developer a task communicated like this: "Figure out why control A is not loading on this form". Right there it could be argued that the manager should have given better direction and said something more like: "Control A is not loading on the form; fix it". They might sound like the same thing to most people, but the first statement will have the developer problem-solving the reason why it is failing, while the second should have the developer looking for the solution to make it work, not focusing on why it is broken. In the end they might amount to the same thing, but I usually see the first approach take far longer than the second.

    The Problem Solver: the problem solver's approach to fixing something that is broken is likely to take the error or behavior being observed and start researching it using a tool like Google or any other search engine. Seven times out of ten this will yield results for the most common issues. The challenge is in the other 30% of issues, which will take the problem solver down the rabbit hole and cause them not to surface for days on end while every avenue is explored for the cause of the problem. In the end they will probably find the cause and resolve it, but the cost can be days or weeks of work.

    The Solution Finder: the solution finder's approach to a problem begins the same way the problem solver's does. The difference comes in the more difficult cases. Rather than stick to the pure "this has to work, so I am going to work at it until it does" approach, the solution finder will look for other ways to get the requirements satisfied that may or may not use the original approach. For example, say there are two areas of an application with externally equivalent features, meaning that from a user's perspective the behavior is the same. Now say that, for whatever reason, area A stops working while area B still works. The problem solver will dig in to see why area A is broken, whereas the solution finder will investigate the difference between the two areas and solve the problem by potentially working around it. The other notable difference between the two types of developers is the point they reach before re-emerging from their task. The problem solver will likely emerge with a triumphant "I have found the problem", whereas the solution finder will emerge with the more useful "I have the solution".

    Conclusion: at the end of the day, users are what drive features in software development. Without users there is no need for software. In today's world of software development, with so many tools to use and generally tight schedules, I believe that a workaround that takes 8 hours is a more fruitful approach than the purer solution to the problem that takes 40 hours.

    Read the article

  • Building vs. Buying a Master Data Management Solution

    - by david.butler(at)oracle.com
    Many organizations prefer to build their own MDM solutions. The argument is that they know their data quality issues and their data better than anyone, and a focused solution will cost less in the long run than a vendor-supplied general-purpose product. This is not unreasonable if you think of MDM as a point solution for a particular data quality problem, but this approach carries significant risk. We now know that organizations achieve significant competitive advantages when they deploy MDM as a strategic, enterprise-wide solution, the most common best practice being to deploy a tactical MDM solution and grow it into a full information architecture. A build-your-own approach most certainly will not scale to a larger architecture unless it is done correctly with the larger solution in mind.

    It is possible to build a home-grown point MDM solution in such a way that it will dovetail into broader MDM architectures. A very good place to start is to use the same basic technologies that Oracle uses to build its own MDM solutions. Start with the Oracle 11g database to create a flexible, extensible and open data model to hold the master data and all needed attributes. The Oracle database is the most flexible, highly available and scalable database system on the market; with its Real Application Clusters (RAC) it can even support the mixed OLTP and BI workloads that represent typical MDM data access profiles. Use Oracle Data Integrator (ODI) for batch data movement between applications, MDM data stores, and the BI layer. Use Oracle GoldenGate for more real-time data movement. Use Oracle's SOA Suite for application integration, with its BPEL Process Manager to orchestrate MDM connections to business processes, Identity Management for managing users, WS Manager for managing web services, Business Intelligence Enterprise Edition for analytics, and JDeveloper for creating or extending the MDM management application. Oracle utilizes these technologies to build its MDM Hubs. Customers who build their own MDM solution using these components will easily migrate to Oracle-provided MDM solutions when the home-grown solution runs out of gas. But even with a full stack of open, flexible MDM technologies, creating a robust MDM application can be a daunting task. For example, a basic MDM solution will need:

    - a set of data access methods that support master data as a service, direct real-time access, and batch loads and extracts;
    - a data migration service for initial loads and periodic updates;
    - a metadata management capability for items such as business entity matrixed relationships and hierarchies;
    - a source system management capability to fully cross-reference business objects and to satisfy seemingly conflicting data ownership requirements;
    - a data quality function that can find and eliminate duplicate data while ensuring correct data attribute survivorship;
    - a set of data quality functions that can manage structured and unstructured data;
    - a data quality interface to assist with preventing new errors from entering the system even when data entry happens outside the MDM application itself;
    - a continuing data cleansing function to keep the data up to date;
    - an internal triggering mechanism to create and deploy change information to all connected systems;
    - a comprehensive role-based data security system to control and monitor data access and update rights, and to maintain change history;
    - a flexible business rules engine for managing master data processes such as privacy and data movement;
    - a user interface to support casual users and data stewards;
    - a business intelligence structure to support profiling, compliance, and business performance indicators;
    - and an analytical foundation for directly analyzing master data.

    Oracle's pre-built MDM Hub solutions are full-featured 3-tier Internet applications designed to participate in the full Oracle technology stack or to run independently in other open IT SOA environments. Building MDM solutions from scratch can take years; Oracle's pre-built MDM solutions can bring quality data to the enterprise in a matter of months. But if you must build, at least build with the world's best technology stack, in a way that simplifies the eventual upgrade to Oracle MDM and to the full enterprise-wide information architecture it enables.

    Read the article

  • DNASTREAM’s RapidLaunch Oracle Accelerate solution for RightNow

    - by Richard Lefebvre
    The Oracle RightNow Accelerate solution from DNASTREAM allows each customer to enjoy quicker deployment and earlier time to benefit from this SaaS customer experience solution. At the start of the project, a full suite of e-learning simulations and materials is provided by DNASTREAM to match the customer's processes. This RapidLaunch content library for RightNow can be leveraged by customers early in their project implementations, bringing significant cost efficiencies, time reduction, and improved user adoption to their rollouts. Solution profile: this Oracle Accelerate solution is based on Oracle RightNow CX and includes content management, contact management, incident management, a customer portal, closed-incident surveys, and standard reports. The Oracle RightNow CX Chat implementation is available as an additional option. For more information about RightNow and the DNASTREAM Accelerate solution, visit the Oracle Accelerate microsite or contact www.dnastream.com

    Read the article

  • Emailing Interviewer after interview regarding technical solution

    - by Raghav Shankar
    I had an interview yesterday where I was given a programming problem and asked to figure out the optimal solution for it. I gave a solution that worked in linear time but used 2 separate loops (not nested loops). At the end of the interview, the interviewer saw I was interested in solving the problem, so he said the optimal solution uses only one loop and has linear complexity, and when I asked for his card he gave me one. I think I might have since figured out that solution, and I was wondering whether it's all right to email the recruiter thanking him for his time and also mentioning the solution I figured out?

    Read the article

  • Looking for Non Hosted Audio & Video Podcasting Solution for Church Websites

    - by motboys
    I am looking for a solution that will do the following:

    - user uploads audio and/or video files with title, description, image, etc.
    - solution embeds the info into ID3 tags
    - solution generates an RSS feed
    - solution embeds the new content in our website
    - content on the website is searchable

    This is for a couple of church websites I manage. I am looking for the ability to do the above with a sermon MP3 and also a video. At the moment we are doing it with multiple steps and people involved, and I want to automate the process. I can't seem to find a solution that does all of the above. Thank you!

    Read the article

  • C# Solution - How many projects?

    - by Oskar Kjellin
    Hey, I googled this a little but couldn't find a good result. Right now I'm building a web site, and I'm trying to make it as correct as possible from a design point of view from the beginning. The problem I'm now facing is that when deciding to start with logging, I needed a project to place this code in. As I could not find a suitable place in my current projects, I thought: hey, why not a logging class library? Is there a general guideline on how many projects a solution should have? I know this would be a rather small project, but it would be nice to get it entirely out of my way! Any hints are appreciated :)

    Read the article
