Search Results

Search found 27047 results on 1082 pages for 'multiple projects'.


  • C# Login Dialog Implementation

    - by arc1880
    I have implemented a LoginAccess class that prompts the user to enter their Active Directory username and password, then saves the login data as an encrypted file. On every subsequent start of the application, the LoginAccess class reads the encrypted file and checks against Active Directory to see whether the login information is still valid. If it is not, it prompts the user again. The reading of the encrypted file and the display of the login dialog are done on a separate thread, and a delegate is fired when the login process is complete. The issue I'm having is that a class used in multiple places contains the call to the LoginAccess object. Every time I instantiate a new object of that class there is another call to the LoginAccess object, and I get multiple dialogs when it tries to prompt for a username and password. Any suggestions on how to have only one dialog appear would be greatly appreciated.
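
    One way to avoid the duplicate prompts is to share a single login operation across all callers, for example behind a lazily initialized task. A minimal sketch; the LoginAccess type and its Login() method are assumed from the question, not a real API:

        using System;
        using System.Threading.Tasks;

        // Hypothetical wrapper: all consumers ask for the same shared login task,
        // so the LoginAccess dialog is created at most once per process.
        public static class SharedLogin
        {
            private static readonly object _sync = new object();
            private static Task<bool> _loginTask;

            public static Task<bool> EnsureLoggedInAsync()
            {
                lock (_sync)
                {
                    // First caller starts the login; later callers reuse the same task.
                    if (_loginTask == null)
                    {
                        _loginTask = Task.Factory.StartNew(() =>
                        {
                            var login = new LoginAccess();   // assumed type from the question
                            return login.Login();            // assumed to show the dialog if needed
                        });
                    }
                    return _loginTask;
                }
            }
        }

    Each consumer then waits on SharedLogin.EnsureLoggedInAsync() (or attaches a continuation in place of the completion delegate) instead of newing up its own LoginAccess.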

    Read the article

  • Real World Nuget

    - by JoshReuben
    Why Nuget
    A higher level of granularity for managing references. When you have solutions of many projects that depend on solutions of many projects etc., you escape from Solution Hell.

    Links
    · Using a GUI (Package Explorer) to build packages - http://docs.nuget.org/docs/creating-packages/using-a-gui-to-build-packages
    · Creating a Nuspec file - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic2.aspx
    · Consuming a Nuget package - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic3
    · Nuspec reference - http://docs.nuget.org/docs/reference/nuspec-reference
    · Updating packages - http://nuget.codeplex.com/wikipage?title=Updating%20All%20Packages
    · Versioning - http://docs.nuget.org/docs/reference/versioning

    POC Folder Structure

    POC Setup Steps
    · Install Package Explorer
    · Source
      o Create a source solution – configure the output directory for projects (Project > Properties > Build > Output Path)
    · Package
      o Add assemblies to the package from the output directory (drag & drop) - add a net folder
      o File > Export – save .nuspec files and lib contents:

        <?xml version="1.0" encoding="utf-16"?>
        <package>
          <metadata>
            <id>MyPackage</id>
            <version>1.0.0.3</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <summary />
          </metadata>
        </package>

      o File > Save – saves the .nupkg file
    · Create Target Solution
      o In Tools > Options: configure the package source
      o Add the package and select projects

    Output from the Package Manager (PowerShell console):

        ------- Installing...MyPackage 1.0.0 -------
        Added file 'NugetSource.AssemblyA.dll' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyA.pdb' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyB.dll' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyB.pdb' to folder 'MyPackage.1.0.0\lib'.
        Added file 'MyPackage.1.0.0.nupkg' to folder 'MyPackage.1.0.0'.
        Successfully installed 'MyPackage 1.0.0'.
        Added reference 'NugetSource.AssemblyA' to project 'AssemblyX'
        Added reference 'NugetSource.AssemblyB' to project 'AssemblyX'
        Added file 'packages.config'.
        Added file 'packages.config' to project 'AssemblyX'
        Added file 'repositories.config'.
        Successfully added 'MyPackage 1.0.0' to AssemblyX.

    o A packages folder is created at solution level.
    o A packages.config file is generated in each project:

        <?xml version="1.0" encoding="utf-8"?>
        <packages>
          <package id="MyPackage" version="1.0.0" targetFramework="net40" />
        </packages>

    A local packages folder is created for the package versions installed. Each folder contains the downloaded .nupkg file and its unpacked contents – e.g. the dlls that the project references. Note: this folder is not checked in.

    Update Packages
    o Configure Package Manager to automatically check for updates.
    o Browse packages - it automatically picks up the updates.

    Update Procedure
    · Modify the source
    · Change the source version in the assembly info
    · Build the source
    · Open the last package in Package Explorer
    · Increment the package version number and re-add the assemblies
    · Save the package with the new version number and export its definition
    · In the target solution – Tools > Manage Nuget Packages – click on All to trigger a refresh, then click on recent packages to see the updates
    · If problematic, delete the packages folder

    Versioning
        uninstall-package mypackage
        install-package mypackage –version 1.0.0.3
        uninstall-package mypackage
        install-package mypackage –version 1.0.0.4

    Dependencies
        <?xml version="1.0" encoding="utf-16"?>
        <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
          <metadata>
            <id>MyDependentPackage</id>
            <version>1.0.0</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <dependencies>
              <group targetFramework=".NETFramework4.0">
                <dependency id="MyPackage" version="1.0.0.4" />
              </group>
            </dependencies>
          </metadata>
        </package>

    Using NuGet without committing packages to source control
    http://docs.nuget.org/docs/workflows/using-nuget-without-committing-packages
    Right click on the Solution node in Solution Explorer and select Enable NuGet Package Restore. Recall that the packages folder is not part of the solution.
    If you get "downloading package 'Nuget.build' failed", configure the proxy to support the certificate for https://nuget.org/api/v2/ and allow unrestricted access to packages.nuget.org.
    To test connectivity: get-package –listavailable
    To test NuGet Package Restore: delete the packages folder and open Visual Studio as admin.
    In the NuGet MSBuild setup: <Import Project="$(SolutionDir)\.nuget\nuget.targets" />

    TFS Build Integration
    Modify the Nuget.targets file:
        <RestorePackages Condition=" '$(RestorePackages)' == '' ">True</RestorePackages>
        …
        <PackageSource Include="\\IL-CV-004-W7D\Packages" />
    Add the system environment variable EnableNuGetPackageRestore=true and restart the "Visual Studio Team Foundation Build Service Host" service.
    Important: ensure Network Service has access to the Packages folder.

    Nugetter TFS Build Integration
    Add the Nugetter build process templates to TFS source control. For the Build Controller, specify the location of custom assemblies. Generate the .nuspec file from Package Explorer: File > Export. Edit the file elements – remove path info from the src and target attributes:

        <?xml version="1.0" encoding="utf-16"?>
        <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
          <metadata>
            <id>Common</id>
            <version>1.0.0</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <dependencies>
              <group targetFramework=".NETFramework3.5" />
            </dependencies>
          </metadata>
          <files>
            <file src="CommonTypes.dll" target="CommonTypes.dll" />
            <file src="CommonTypes.pdb" target="CommonTypes.pdb" />
            …

    Add the .nuspec file to the solution so that it is available for the build: Dev\NovaNuget\Common\NuSpec\common.1.0.0.nuspec
    Add a Build Process Definition based on the Nugetter build process template and configure the build process – specify:
    · the .sln to build
    · the base path (output directory)
    · the Nuget.exe file path
    · the .nuspec file path

    Copy DLLs to a binary folder
    1) Set copy local for an assembly reference to false.
    2) MSBuild Copy Task – modify the .csproj file: http://msdn.microsoft.com/en-us/library/3e54c37h.aspx

        <ItemGroup>
          <MySourceFiles Include="$(MSBuildProjectDirectory)\..\SourceAssemblies\**\*.*" />
        </ItemGroup>
        <Target Name="BeforeBuild">
          <Copy SourceFiles="@(MySourceFiles)" DestinationFolder="bin\debug\SourceAssemblies" />
        </Target>

    3) Set the probing assembly search path from app.config - http://msdn.microsoft.com/en-us/library/823z9h8w(v=vs.80).aspx

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <probing privatePath="SourceAssemblies"/>
            </assemblyBinding>
          </runtime>
        </configuration>

    Forcing 'copy local = false'
    The following generic PowerShell script was added to the package's install.ps1:

        param($installPath, $toolsPath, $package, $project)
        if ($project.Object.Project.Name -ne "CopyPackages")
        {
            $asms = $package.AssemblyReferences | %{$_.Name}
            foreach ($reference in $project.Object.References)
            {
                if ($asms -contains $reference.Name + ".dll")
                {
                    $reference.CopyLocal = $false;
                }
            }
        }

    An empty project named "CopyPackages" was added to the solution - it references all the packages and is the only one set to CopyLocal="true". No MSBuild knowledge required.

    Read the article

  • Thread safety of Matlab engine API

    - by Jeremy
    I have discovered through trial and error that the MATLAB engine API is not completely thread safe. Does anyone know the rules? What I have discovered so far through trial and error: On Windows, the connection to MATLAB is via COM, so the COM apartment threading rules apply. All calls must occur in the same thread, but multiple connections can occur in multiple threads as long as each connection is isolated. From the answers below, it seems that this is not the case on UNIX, where calls can be made from multiple threads as long as the calls are made serially.
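
    Given the apartment rule above (all calls from the thread that created the connection), one common workaround is to pin every engine call to a single dedicated thread. A rough C# sketch of such a marshaller, independent of the actual MATLAB COM types (the engine object itself is just a placeholder created inside the first call):

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using System.Threading.Tasks;

        // Serializes all work onto one dedicated thread, so a COM object created
        // on that thread is only ever touched from that thread.
        public sealed class SingleThreadExecutor : IDisposable
        {
            private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
            private readonly Thread _thread;

            public SingleThreadExecutor()
            {
                _thread = new Thread(() =>
                {
                    foreach (var work in _queue.GetConsumingEnumerable())
                        work();
                });
                _thread.SetApartmentState(ApartmentState.STA);  // matches the COM apartment rule above
                _thread.IsBackground = true;
                _thread.Start();
            }

            public Task<T> Run<T>(Func<T> call)
            {
                var tcs = new TaskCompletionSource<T>();
                _queue.Add(() =>
                {
                    try { tcs.SetResult(call()); }
                    catch (Exception ex) { tcs.SetException(ex); }
                });
                return tcs.Task;
            }

            public void Dispose()
            {
                _queue.CompleteAdding();
            }
        }

    The engine connection would be created inside the first Run() call and every subsequent evaluation routed through the same executor; a second executor gives a second, isolated connection on its own thread.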

    Read the article

  • setting up/installing/configuring nginx LEMP stack on fresh VPS server

    - by grant tailor
    I need some help setting up, installing and configuring an nginx LEMP stack on a fresh new VPS I have. The specs of the CentOS 5.7 VPS are 2GB DDR3 ECC RAM (4GB burst), 1 core 1.5GHz (3GHz burst) and 100GB RAID 10 storage, unmetered bandwidth @ 100Mbps, all for a whopping $25/month (unbeatable, yeah I know :). Anyway, I have followed this LEMP (will also need MySQL and PHP) stack guide on Linode http://library.linode.com/lemp-guides/centos-5 but basically what I want is to be able to host multiple websites on this webserver after everything is set up. I am used to using the DirectAdmin control panel on other servers and want to have things set up so I can host multiple websites... mostly WordPress and Drupal themes. Let's say 10 websites on this nginx web server. So can someone please help me with what I need to do to take "full" advantage of nginx's power and performance, while being able to easily manage these multiple websites (WordPress and Drupal themes)? Thanks.
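
    For the "multiple websites" part, nginx handles this with one server block per site, matched by server_name. A minimal sketch; the paths, domain names and PHP-FPM address below are placeholders, not taken from the question:

        # /etc/nginx/conf.d/site1.example.com.conf  (hypothetical path)
        server {
            listen 80;
            server_name site1.example.com;
            root /srv/www/site1.example.com/public_html;
            index index.php index.html;

            location / {
                try_files $uri $uri/ /index.php?$args;   # typical WordPress/Drupal front-controller rewrite
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;             # assumes PHP-FPM listening here
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

        # Repeat a similar block (different server_name and root) for each of the ~10 sites.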

    Read the article

  • Silverlight Line Graph with Gradient

    - by gav
    I have a series of points which I will turn into a line on a graph. What I want is to give the area under the line a gradient fill, so it would look somewhat similar to a Bloomberg graph. My question really has three parts: First, how should I fill only the area under the graph? Second, how do I fill that with a gradient? Finally, if I have multiple lines on the same graph, any area under more than one line should have a greyscale gradient fill - how would you set this up? My biggest problem is deciding on the data structures to use. I could use many multi-sided shapes (one for each line/data series) and then tell the brush to draw: transparent if it's not in any shape; the colour of one series if it's in one shape (alpha relative to height to give the gradient); black if it's in multiple shapes (alpha relative to height to give the gradient). Then I'd draw the shapes' boundaries in white afterwards. Thanks, Gav
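
    For the first two parts, one approach is to close the polyline into a polygon by dropping down to the baseline at both ends, then fill that Path with a vertical LinearGradientBrush. A rough C# sketch under those assumptions (the point list, baseline position and colours are placeholders):

        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Shapes;

        public static class ChartShapes
        {
            // Builds a filled "area under the line" shape from a series of points.
            // 'points' are already in canvas coordinates; 'baselineY' is the x-axis position.
            public static Path BuildAreaUnderLine(PointCollection points, double baselineY)
            {
                var figure = new PathFigure { StartPoint = new Point(points[0].X, baselineY), IsClosed = true };
                figure.Segments.Add(new LineSegment { Point = points[0] });
                for (int i = 1; i < points.Count; i++)
                    figure.Segments.Add(new LineSegment { Point = points[i] });
                // Drop back down to the baseline so the figure closes around the area under the line.
                figure.Segments.Add(new LineSegment { Point = new Point(points[points.Count - 1].X, baselineY) });

                var geometry = new PathGeometry();
                geometry.Figures.Add(figure);

                // Vertical gradient: stronger colour near the top, fading out towards the baseline.
                var fill = new LinearGradientBrush { StartPoint = new Point(0, 0), EndPoint = new Point(0, 1) };
                fill.GradientStops = new GradientStopCollection();
                fill.GradientStops.Add(new GradientStop { Color = Color.FromArgb(0xC0, 0x33, 0x66, 0xCC), Offset = 0 });
                fill.GradientStops.Add(new GradientStop { Color = Color.FromArgb(0x00, 0x33, 0x66, 0xCC), Offset = 1 });

                return new Path { Data = geometry, Fill = fill };
            }
        }

    For the overlap question, one option is to give each series' area a partially transparent fill so overlapping regions naturally darken, rather than computing the intersections explicitly.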

    Read the article

  • Book Review: Brownfield Application Development in .NET

    - by DotNetBlues
    I recently finished reading the book Brownfield Application Development in .NET by Kyle Baley and Donald Belcham.  The book is available from Manning.  First off, let me say that I'm a huge fan of Manning as a publisher.  I've found their books to be top-quality, over all.  As a Kindle owner, I also appreciate getting an ebook copy along with the dead tree copy.  I find ebooks to be much more convenient to read, but hard-copies are easier to reference. The book covers, surprisingly enough, working with brownfield applications.  Which is well and good, if that term has meaning to you.  It didn't for me.  Without retreading a chunk of the first chapter, the authors break code bases into three broad categories: greenfield, brownfield, and legacy.  Greenfield is, essentially, new development that hasn't had time to rust and is (hopefully) being approached with some discipline.  Legacy applications are those that are more or less stable and functional, that do not expect to see a lot of work done to them, and are more likely to be replaced than reworked. Brownfield code is the gray (brown?) area between the two and the authors argue, quite effectively, that it is the most likely state for an application to be in.  Brownfield code has, in some way, been allowed to tarnish around the edges and can be difficult to work with.  Although I hadn't realized it, most of the code I've worked on has been brownfield.  Sometimes, there's talk of scrapping and starting over.  Sometimes, the team dismisses increased discipline as ivory tower nonsense.  And, sometimes, I've been the ignorant culprit vexing my future self. The book is broken into two major sections, plus an introduction chapter and an appendix.  The first section covers what the authors refer to as "The Ecosystem" which consists of version control, build and integration, testing, metrics, and defect management.  The second section is on actually writing code for brownfield applications and discusses object-oriented principles, architecture, external dependencies, and, of course, how to deal with these when coming into an existing code base. The ecosystem section is just shy of 140 pages long and brings some real meat to the matter.  The focus on "pain points" immediately sets the tone as problem-solution, rather than academic.  The authors also approach some of the topics from a different angle than some essays I've read on similar topics.  For example, the chapter on automated testing is on just that -- automated testing.  It's all well and good to criticize a project as conflating integration tests with unit tests, but it really doesn't make anyone's life better.  The discussion on testing is more focused on the "right" level of testing for existing projects.  Sometimes, an integration test is the best you can do without gutting a section of functional code.  Even if you can sell other developers and/or management on doing so, it doesn't actually provide benefit to your customers to rewrite code that works.  This isn't to say the authors encourage sloppy coding.  Far from it.  Just that they point out the wisdom of ignoring the sleeping bear until after you deal with the snarling wolf. The other sections take a similarly real-world, workable approach to the pain points they address.  As the section moves from technical solutions like version control and continuous integration (CI) to the softer, process issues of metrics and defect tracking, the authors begin to gently suggest moving toward a zero defect count.  
While that really sounds like an unreasonable goal for a lot of ongoing projects, it's quite apparent that the authors have first-hand experience with taming some gruesome projects.  The suggestions are grounded and workable, and the difficulty of some situations is explicitly acknowledged. I have to admit that I started getting bored by the end of the ecosystem section.  No matter how valuable I think a good project manager or business analyst is to a successful ALM, at the end of the day, I'm a gear-head.  Also, while I agreed with a lot of the ecosystem ideas, in theory, I didn't necessarily feel that a lot of the single-developer projects that I'm often involved in really needed that level of rigor.  It's only after reading the sidebars and commentary in the coding section that I had the context for the arguments made in favor of a strong ecosystem supporting the development process.  That isn't to say that I didn't support good product management -- indeed, I've probably pushed too hard, on occasion, for a strong ALM outside of just development.  This book gave me deeper insight into why some corners shouldn't be cut and how damaging certain sins of omission can be. The code section, though, kept me engaged for its entirety.  Many technical books can be used as reference material from day one.  The authors were clear, however, that this book is not one of these.  The first chapter of the section (chapter seven, over all) addresses object oriented (OO) practices.  I've read any number of definitions, discussions, and treatises on OO.  None of the chapter was new to me, but it was a good review, and I'm of the opinion that it's good to review the foundations of what you do, from time to time, so I didn't mind. The remainder of the book is really just about how to apply OOP to existing code -- and, just because all your code exists in classes does not mean that it's object oriented.  That topic has the potential to be extremely condescending, but the authors miraculously managed to never once make me feel like a dolt or that they were wagging their finger at me for my prior sins.  Instead, they continue the "pain points" and problem-solution presentation to give concrete examples of how to apply some pretty academic-sounding ideas.  That's a point worth emphasizing, as my experience with most OO discussions is that they stay in the academic realm.  This book gives some very, very good explanations of why things like the Liskov Substitution Principle exist and why a corporate programmer should even care.  Even if you know, with absolute certainty, that you'll never have to work on an existing code-base, I would recommend this book just for the clarity it provides on OOP. This book goes beyond just theory, or even real-world application.  It presents some methods for fixing problems that any developer can, and probably will, encounter in the wild.  First, the authors address refactoring application layers and internal dependencies.  Then, they take you through those layers from the UI to the data access layer and external dependencies.  Finally, they come full circle to tie it all back to the overall process.  By the time the book is done, you're left with a lot of ideas, but also a reasonable plan to begin to improve an existing project structure. Throughout the book, it's apparent that the authors have their own preferred methodology (TDD and domain-driven design), as well as some preferred tools.  The "Our .NET Toolbox" is something of a neon sign pointing to that latter point.  
They do not beat the reader over the head with anything resembling a "One True Way" mentality.  Even for the most emphatic points, the tone is quite congenial and helpful.  With some of the near-theological divides that exist within the tech community, I found this to be one of the more remarkable characteristics of the book.  Although the authors favor tools that might be considered Alt.NET, there is no reason the advice and techniques given couldn't be quite successful in a pure Microsoft shop with Team Foundation Server.  For that matter, even though the book specifically addresses .NET, it could be applied to a Java and Oracle shop, as well.

    Read the article

  • How to code Fizzbuzz in F#

    - by Russell
    I am currently learning F# and have tried an (extremely) simple example of FizzBuzz. This is my initial attempt:

        for x in 1..100 do
            if x % 3 = 0 && x % 5 = 0 then printfn "FizzBuzz"
            elif x % 3 = 0 then printfn "Fizz"
            elif x % 5 = 0 then printfn "Buzz"
            else printfn "%d" x

    What solutions could be more elegant/simple/better (explaining why) using F# to solve this problem? Note: The FizzBuzz problem is going through the numbers 1 to 100; every multiple of 3 prints Fizz, every multiple of 5 prints Buzz, and every multiple of both 3 and 5 prints FizzBuzz. Otherwise, the number is simply displayed. Thanks :)

    Read the article

  • Is there an existing solution to the multithreaded data structure problem?

    - by thr
    I've had the need for a multi-threaded data structure that supports these claims: allows multiple concurrent readers and writers; is sorted; is easy to reason about. Fulfilling multiple readers and one writer is a lot easier, but I really would want to allow multiple writers. I've been doing research into this area, and I'm aware of ConcurrentSkipList (by Lea, based on work by Fraser and Harris) as it's implemented in Java SE 6. I've also implemented my own version of a concurrent skip list based on A Provably Correct Scalable Concurrent Skip List by Herlihy, Lev, Luchangco and Shavit. These two implementations are developed by people that are light years smarter than me, but I still (somewhat ashamed, because it is amazing work) have to ask: are these the only two viable implementations of a concurrent multi reader/writer data structure available today?
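
    For comparison, the "multiple readers, serialized writers" baseline mentioned above is straightforward to build in .NET by wrapping a sorted container in a ReaderWriterLockSlim; it is sorted and easy to reason about, but writers do not run concurrently. A minimal sketch:

        using System.Collections.Generic;
        using System.Threading;

        // Sorted map allowing many concurrent readers; writers are serialized by the lock.
        public class ReadMostlySortedMap<TKey, TValue>
        {
            private readonly SortedDictionary<TKey, TValue> _map = new SortedDictionary<TKey, TValue>();
            private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

            public void Add(TKey key, TValue value)
            {
                _lock.EnterWriteLock();
                try { _map[key] = value; }
                finally { _lock.ExitWriteLock(); }
            }

            public bool TryGet(TKey key, out TValue value)
            {
                _lock.EnterReadLock();
                try { return _map.TryGetValue(key, out value); }
                finally { _lock.ExitReadLock(); }
            }

            public List<KeyValuePair<TKey, TValue>> Snapshot()
            {
                _lock.EnterReadLock();
                try { return new List<KeyValuePair<TKey, TValue>>(_map); }
                finally { _lock.ExitReadLock(); }
            }
        }

    Truly concurrent writers are what push you toward lock-free skip lists like the two implementations cited in the question.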

    Read the article

  • How to define a query on an n-m table

    - by user559889
    Hi, I have some trouble defining a query. I have a Product and a Category table. A product can belong to multiple categories and vice versa, so there is also a Product-Category table. Now I want to select all products that belong to a certain category, but if the user does not provide a category I want all products. I tried to create a query using a join, but this results in a product being selected multiple times if it belongs to multiple categories (in the case where no specific category is queried). What kind of query do I have to create? Thanks
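
    One way to avoid the duplicate rows is to filter with EXISTS against the link table instead of joining it, and make the category filter optional. A sketch with assumed table, column and parameter names (Product, ProductCategory, @CategoryId; the parameter syntax shown is SQL Server style):

        SELECT p.*
        FROM Product p
        WHERE @CategoryId IS NULL
           OR EXISTS (SELECT 1
                      FROM ProductCategory pc
                      WHERE pc.ProductId = p.ProductId
                        AND pc.CategoryId = @CategoryId);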

    Read the article

  • Cross-database transactions from one SP

    - by Michael Bray
    I need to update multiple databases with a few simple SQL statements. The databases are configured in SQL Server using 'Linked Servers', and the SQL versions are mixed (SQL 2008, SQL 2005, and SQL 2000). I intend to write a stored procedure in one of the databases, but I would like to do so using a transaction to make sure that each database gets updated consistently. Which of the following is most accurate? Will a single BEGIN/COMMIT TRANSACTION work to guarantee that all statements across all databases are successful? Will I need multiple BEGIN TRANSACTIONs for each individual set of commands on a database? Are transactions even supported when updating remote databases? I would need to execute a remote SP with embedded transaction support. Note that I don't care about any kind of cross-database referential integrity; I'm just trying to update multiple databases at the same time from a single stored procedure if possible. Any other suggestions are welcome as well. Thanks!
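
    For reference, SQL Server can promote work against linked servers into a distributed transaction, coordinated by MSDTC (which must be running and configured on all machines involved). A hedged sketch of the shape such a procedure could take, inside a stored procedure taking @OrderId; the server, database and table names are placeholders:

        BEGIN DISTRIBUTED TRANSACTION;

        UPDATE dbo.Orders                           SET Status = 'Closed' WHERE OrderId = @OrderId;
        UPDATE [RemoteServer1].RemoteDb.dbo.Orders  SET Status = 'Closed' WHERE OrderId = @OrderId;
        UPDATE [RemoteServer2].OldDb.dbo.Orders     SET Status = 'Closed' WHERE OrderId = @OrderId;

        COMMIT TRANSACTION;  -- any failure before this point should trigger a ROLLBACK in error handling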

    Read the article

  • Different return XML in a WCF Operation

    - by Sean Hederman
    I am writing a service to an international HTTP standard, and there is one method that can return three different XML results - call them Single, Multiple and Error. Now I've written an IXmlSerializable class that can consume each of these results and generate them. However, WCF seems to insist that I can only have a single return XML root name: I have to choose an XmlRoot for my custom object of either Single, Multiple or Error. How can I set up WCF so that I can choose at runtime what the root will be? This is what I have currently:

        /// <summary>
        /// A collection of items.
        /// </summary>
        [XmlRoot("Multiple", Namespace = "DAV:")]
        public sealed class ItemCollection : IEnumerable<Item>, IXmlSerializable

        /// <summary>
        /// Processes and returns the items.
        /// </summary>
        [WebInvoke(Method = "POST", UriTemplate = "{*path}", BodyStyle = WebMessageBodyStyle.Bare)]
        [OperationContract]
        [XmlSerializerFormat]
        ItemCollection Process(string path);

    Read the article

  • how to compile f# on mono

    - by leon
    I am trying to compile this example in mono on Ubuntu. However, I get this error:

        wingsit@wingsit-laptop:~/MyFS/kitty$ fsc.exe -o kitty.exe kittyAst.fs kittyParser.fs kittyLexer.fs main.fs
        Microsoft (R) F# 2.0 Compiler build 2.0.0.0
        Copyright (c) Microsoft Corporation. All Rights Reserved.

        /home/wingsit/MyFS/kitty/kittyAst.fs(1,1): error FS0222: Files in libraries or multiple-file applications must begin with a namespace or module declaration, e.g. 'namespace SomeNamespace.SubNamespace' or 'module SomeNamespace.SomeModule'
        /home/wingsit/MyFS/kitty/kittyParser.fs(2,1): error FS0222: Files in libraries or multiple-file applications must begin with a namespace or module declaration, e.g. 'namespace SomeNamespace.SubNamespace' or 'module SomeNamespace.SomeModule'
        /home/wingsit/MyFS/kitty/kittyLexer.fsl(2,1): error FS0222: Files in libraries or multiple-file applications must begin with a namespace or module declaration, e.g. 'namespace SomeNamespace.SubNamespace' or 'module SomeNamespace.SomeModule'
        wingsit@wingsit-laptop:~/MyFS/kitty$

    I am a newbie in F#. Is there something obvious I missed?

    Read the article

  • Advice for a distracted, unhappy, recently graduated programmer? [closed]

    - by Re-Invent
    I graduated 4 months ago. I had offers from a few good places to work at. At the same time I wanted to stick to building a small software business of my own, still have some ideas with good potential, some half done projects frozen in my github. But due to social pressures, I chose a job, the pay is great, but I am half-passionate about it. A small team of smart folks building useful product, working out contracts across the world. I've started finding it extremely boring. Boring to the extent that I skip 2-3 days a week together not doing work. Neither do I spend that time progressing any of my own projects. Yes, I feel stupid at the way I'm wasting time, but I don't understand exactly why is it happening. It's as if all the excitement has been drained. What can I do about it? Long version: School - I was in third standard. Only students, 6th grade had access to computer labs. I once peeked into the lab from the little door opening. No hard-disks, MS DOS on 5 1/2 inch floppies. I asked a senior student to play some sound in BASIC. He used PLAY to compose a tune. Boy! I was so excited, I was jumping from within. Back home, asked my brother to teach me some programming. We bought a book "MODERN All About GW-BASIC for Schools & Colleges". The book had everything, right from printing, to taking input, file i/o, game programming, machine level support, etc. I was in 6th standard, wrote my first game - a wheel of fortune, rotated the wheel by manipulating 16 color palette's definition. Got internet soon, got hooked to QuickBasic programming community. Made some more games "007 in Danger", "Car Crush 2" for submission to allbasiccode archives. I was extremely excited about all this. My interests now swayed into "hacking" (computer security). Taught myself some perl, found it annoying, learnt PHP and a bit of SQL. Also taught myself Visual Basic one of the winters and wrote a pacman clone with Direct X. By the time I was in 10th standard, I created some evil tools using visual basic, php and mysql and eventually landed myself into an unpaid side-job at a government facility, building evil tools for them. It was a dream come true for crackers of that time. And so was I, still very excited. Things changed soon, last two years of school were not so great as I was balancing preps for college, work at govt. and studies for school at same time. College - College was opposite of all I had wished it to be. I imagined it to be a place where I'd spend my 4 years building something awesome. It was rather an epitome of rote learning, attendance, rules, busy schedules, ban on personal laptops, hardly any hackers surrounding you and shit like that. We had to take permissions to even introduce some cultural/creative activities in our annual schedule. The labs won't be open on weekends because the lab employees had to have their leaves. Yes, a horrible place for someone like me. I still managed to pull out a project with a friend over 2 months. Showed it to people high in the academia hierarchy. They were immensely impressed, we proposed to allow personal computers for students. They made up half-assed reasons and didn't agree. We felt frustrated. And so on, I still managed to teach myself new languages, do new projects of my own, do an intern at the same govt. facility, start a small business for sometime, give a talk at a conference I'm passionate about, win game-dev and hacking contest at most respected colleges, solve good deal of programming contest problems, etc. 
    At the same time I was not content with all these restrictions, the great emphasis on rote learning, and the sheer wastage of time due to college. I never felt I was overdoing it, but now I feel I burnt myself out. During my last days at college, I did an intern at a bigco. While I spent my time building prototypes for certain LBS, the other interns around me, even a good friend, were just skipping time. I thought maybe, in a few weeks, he would put in some serious effort at the work assigned to him, but all he did was find creative ways to skip work, hide his face from the manager, engage people in talks if they tried to question his progress, etc. I tried a few times to get him on track, but it seems all he wanted was to "not work hard at all and still reap the fruits". I don't know how others take such people, but I find their vicinity very, very poisonous to one's own motivation and productivity. Over that, in the place where I come from, HRs don't give much value to what you have done in the past 4 years. So towards the end of our internship, we were all offered work at the bigco, but the slacker, even after not writing more than 200 lines of code, was made a much better offer. I felt enraged instantly - "Is this how the corp world treats someone who does fruitful, if not extraordinary, work for them for the past 6 months?". Yes, I did try to negotiate and debate. The bigcos seem blind due to departmentalization of responsibilities and many layers of management. I decided not to be in touch with any characters of that depressing play. Probably the busy time I had at college, ignoring friends, ignoring fun and squeezing every bit of free time for myself is also responsible. Probably this is what has drained all my willingness to work for anyone. I find my day job boring; at the same time I wish to maintain it for financial reasons. I feel a bit burnt out and unsatisfied, and at the same time feel an urge to quit working for someone else and start finishing my frozen side-projects (which may be profitable). Though I haven't got much in savings to support myself with food, office, internet bills, etc. I still have my day job, but I don't find it very interesting, and even though the pay is higher than the slacker's, I don't find money to be a great motivator here. I keep comparing myself to my past version. I wonder how to get rid of this and reboot myself back to the way I was in school days - excited about it, tinkering, building, learning new things daily, and NOT BORED?

    Read the article

  • Oracle global lock across process

    - by Jimm
    I would like to synchronize access to a particular insert. Hence, if multiple applications execute this "one" insert, the inserts should happen one at a time. The reason behind the synchronization is that there should only be ONE instance of this entity: if multiple applications try to insert the same entity, only one should succeed and the others should fail. One option considered was to create a composite unique key that would uniquely identify the entity and rely on the unique constraint. For some reason, the DBA department rejected this idea. The other option that came to my mind was to create a stored proc for the insert; if the stored proc can obtain a global lock, then multiple applications invoking the same stored proc, though in their separate database sessions, would have their inserts serialized. My question: is it possible for a stored proc in Oracle 10/11 to obtain such a lock? Any pointers to documentation would be helpful.
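
    Oracle's DBMS_LOCK package is the usual way to take an application-level (user-defined) lock from PL/SQL, and it is visible across sessions. A hedged sketch; the lock name and the entities table are placeholders, and execute privilege on DBMS_LOCK has to be granted by the DBA:

        CREATE OR REPLACE PROCEDURE insert_entity(p_name IN VARCHAR2) AS
          l_handle VARCHAR2(128);
          l_result INTEGER;
        BEGIN
          -- Turn a lock name into a handle, then request an exclusive lock on it.
          DBMS_LOCK.ALLOCATE_UNIQUE('ENTITY_INSERT_LOCK', l_handle);
          l_result := DBMS_LOCK.REQUEST(l_handle,
                                        lockmode          => DBMS_LOCK.X_MODE,
                                        timeout           => 10,
                                        release_on_commit => TRUE);
          IF l_result <> 0 THEN
            RAISE_APPLICATION_ERROR(-20001, 'Could not obtain entity insert lock: ' || l_result);
          END IF;

          -- Serialized section: at most one session at a time gets here.
          INSERT INTO entities (name) VALUES (p_name);
          COMMIT;  -- releases the lock because release_on_commit => TRUE
        END;
        /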

    Read the article

  • Multiblog engine for asp.net

    - by Andrey
    I know different forms of this question were asked on this site multiple times, but I haven't seen a single answer that would satisfy my need. I need an ASP.NET based blogging engine that would use SQL Server as a back end and allow multiple independent blogs in one app instance. I'm writing a community website for a major bank and blogging is the piece I'm not sure about. Answers to other questions cover a broad spectrum from BlogEngine.NET (doesn't support multiple blogs) to CommunityServer (a beast! blogging is just a small piece of it). I don't want to install a full-blown CMS and just use blogging, I want a blogging engine. I don't mind buying a commercial one but I can't find one. I'm pretty much stuck, and any ideas are highly appreciated!

    Read the article

  • Data truncation when retrieving data from MySQL database with prepared statements

    - by KSiimson
    I have a script that retrieves multiple products using prepared statements. Like putting loops into loops, I have prepared statements in prepared statements - so there is a prepared statement for retrieving all products, a prepared statement to retrieve all images for that product, a prepared statement to get all attributes for that products, and so on. This does not work with one MySQLi instance, so I use multiple MySQLi objects that are opened and closed when needed. It usually works fine, but sometimes, especially when displaying multiple products, some data is truncated. For example - MicoLoans becomes MicoLoa. There was an actual spelling mistake here - now when I changed MicoLoans to MicroLoans, the same page displayed MicroLoa... So the same number of characters was truncated from the end. It is sort of consistent where it appears - for example there can be descriptions for 8 products, and description of 1 product is heavily truncated. When I add 9th product, the short description is still truncated for that same product as before. Any ideas?

    Read the article

  • What is the best way to add categories to posts - Ruby on Rails blog...

    - by bgadoci
    I am new to Ruby and Rails so bear with me please. I have created a very simple blog application with both posts and comments. Everything works great. My next question regarding adding categories. I am wondering the best way to do this. As I can't see too far in front of me yet when it comes to Rails I thought I would ask. To be clear, I would like that a single post can have multiple categories and a category can have multiple posts. Is the best way to do this to create a 'categories' table and then use the posts and categories models to do has_many :posts, has_many :categories? Would I also then set the routes.rb such that posts are embedded under categories? Or is there an easier way by simply adding a category column to the existing posts table? (in which case I would imagine having multiple categories would be difficult).

    Read the article

  • What is a .NET managed module?

    - by Abhijeet Patel
    I know it's a Windows PE32, but I also know that the unit of deployment in .NET is an assembly, which in turn has a manifest and can be made up of multiple managed modules. My questions are: 1) How would you create multiple managed modules when building a project such as a class lib or a console app etc.? 2) Is there a way to specify this to the compiler (via the project properties, for example) to partition your source code files into multiple managed modules? If so, what is the benefit of doing so? 3) Can managed modules span assemblies? 4) Are separate files created on disk when the source code is compiled, or are these created in memory and directly embedded in an assembly?
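
    For question 1, the Visual Studio project system does not expose this as far as I know, but the command-line C# compiler can emit netmodules and then link them into an assembly. A sketch of the csc.exe switches involved (the file names are placeholders):

        csc /target:module /out:Utilities.netmodule Utilities.cs
        csc /target:library /addmodule:Utilities.netmodule /out:MyLib.dll MyLib.cs

    /target:module produces a managed module as a separate file on disk (which also answers question 4 for this route), and /addmodule records it in the manifest of MyLib.dll.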

    Read the article

  • What is a good approach for a Data Access Layer?

    - by Adil Mughal
    Our software is a customized Human Resource Management System (HRMS) using ASP.NET with Oracle as the database, and now we are actually moving to make it a product that supports multiple tenants with their own databases. Our options: 1) Use NHibernate to support multiple databases and make use of OO. But we are concerned about NHibernate's learning curve and any problems we might face. 2) Make a generalized DAL which will continue working with Oracle using stored procedures, and use tools to convert it to other databases such as SQL Server or MySQL. There is a risk associated with having to support multiple database-dependent versions of a single script. 3) Provide the software as a service (SaaS) and maintain the way we conduct business. However, there may be clients who do not want or trust the cloud or other SaaS business models. With this in mind, what's the best data access layer technique?

    Read the article

  • SQL optional parameters through VB.net

    - by ScaryJones
    I've a document search page with three listboxes that allow multiple selections: Category A, Year, and Category B. Only Category A is mandatory; the others are optional parameters and might be empty. Each document can belong to multiple options in Category A and multiple options in Category B, but each document only has one year associated with it. I've kind of got this working by building up a dynamic SQL string, but it's messy and I hate using it, so I thought I'd ask here if anyone could see an easier way of doing this. An example of the kind of dynamic SQL query I end up with follows:

        select * from library
        where libraryID in (select distinct libraryID from categoryAdocs where categoryAdocID in (4))
           or year in (2004)
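
    If the selections can be passed as parameters (a single value per filter is shown here for simplicity; multi-select lists would need a split function or table-valued parameter), the usual static pattern is to let NULL mean "not supplied". A sketch with assumed parameter names, where categoryBdocs is a hypothetical table mirroring categoryAdocs:

        SELECT l.*
        FROM library l
        WHERE EXISTS (SELECT 1 FROM categoryAdocs a
                      WHERE a.libraryID = l.libraryID
                        AND a.categoryAdocID = @CategoryA)                 -- mandatory filter
          AND (@Year IS NULL OR l.year = @Year)                            -- optional filters
          AND (@CategoryB IS NULL OR EXISTS (SELECT 1 FROM categoryBdocs b
                                             WHERE b.libraryID = l.libraryID
                                               AND b.categoryBdocID = @CategoryB));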

    Read the article

  • Notes on implementing Visual Studio 2010 Navigate To

    - by cyberycon
    One of the many neat functions added to Visual Studio in VS 2010 was the Navigate To feature. You can find it by clicking Edit, Navigate To, or by using the keyboard shortcut Ctrl, (yes, that's control plus the comma key). This pops up the Navigate To dialog. As you type, Navigate To starts searching through a number of different search providers for your term. The entries in the list change as you type, with most providers doing some kind of fuzzy or at least substring matching. If you have C#, C++ or Visual Basic projects in your solution, all symbols defined in those projects are searched. There's also a file search provider, which displays all matching filenames from projects in the current solution as well. And, if you have a Visual Studio package of your own, you can implement a provider too. Micro Focus (where I work) provide the Visual COBOL language inside Visual Studio (http://visualstudiogallery.msdn.microsoft.com/ef9bc810-c133-4581-9429-b01420a9ea40), and we wanted to provide this functionality too. This post provides some notes on the things I discovered mainly through trial and error, but also with some kind help from devs inside Microsoft. The expectation of Navigate To is that it searches across the whole solution, not just the current project. So in our case, we wanted to search for all COBOL symbols inside all of our Visual COBOL projects inside the solution.

    So first of all, here's the Microsoft documentation on Navigate To: http://msdn.microsoft.com/en-us/library/ee844862.aspx. It's the reference information on the Microsoft.VisualStudio.Language.NavigateTo.Interfaces namespace, and it lists all the interfaces you will need to implement to create your own Navigate To provider. Navigate To uses Visual Studio's latest mechanism for integrating external functionality and services, the Managed Extensibility Framework (MEF). MEF components don't require any registration with COM or any other registry entries to be found by Visual Studio. Visual Studio looks in several well-known locations for manifest files (extension.vsixmanifest). It then uses reflection to scan for MEF attributes on classes in the assembly to determine which functionality the assembly provides. MEF itself is actually part of the .NET framework, and you can learn more about it here: http://mef.codeplex.com/. To get started with Visual Studio and MEF you could do worse than look at some of the editor examples on the VSX page http://archive.msdn.microsoft.com/vsx. I've also written a small application to help with switching between development and production MEF assemblies, which you can find on CodeProject: http://www.codeproject.com/KB/miscctrl/MEF_Switch.aspx.

    The Navigate To interfaces
    Back to Navigate To, and summarizing the MSDN reference documentation, you need to implement the following interfaces:

    INavigateToItemProviderFactory
    This is Visual Studio's entry point to your Navigate To implementation, and you must decorate your implementation with the following MEF export attribute: [Export(typeof(INavigateToItemProviderFactory))]

    INavigateToItemProvider
    Your INavigateToItemProviderFactory needs to return your implementation of INavigateToItemProvider. This class implements StartSearch() and StopSearch(). StartSearch() is the guts of your provider, and we'll come back to it in a minute. This object also needs to implement IDisposable.

    INavigateToItemDisplayFactory
    Your INavigateToItemProvider hands back NavigateToItems to the Navigate To framework. But to give you good control over what appears in the Navigate To dialog box, these items will be handed back to your INavigateToItemDisplayFactory, which must create objects implementing INavigateToItemDisplay.

    INavigateToItemDisplay
    Each of these objects represents one result in the Navigate To dialog box. As well as providing the description and name of the item, this object also has a NavigateTo() method that should be capable of displaying the item in an editor when invoked.

    Carrying out the search
    The lifecycle of your INavigateToItemProvider is the same as that of the Navigate To dialog. This dialog is modal, which makes your implementation a little easier because you know that the user can't be changing things in editors and the IDE while this dialog is up. But the Navigate To dialog DOES NOT run on the main UI thread of the IDE – so you need to be aware of that if you want to interact with editors or other parts of the IDE UI. When the user invokes the Navigate To dialog, your INavigateToItemProviderFactory gets sent a TryCreateNavigateToItemProvider() message. Instantiate your INavigateToItemProvider and hand this back. Your INavigateToItemProvider will then get called with StartSearch(), and passed an INavigateToCallback. StartSearch() is an asynchronous request – you must return from this method as soon as possible, and conduct your search on a separate thread. For each match to the search term, instantiate a NavigateToItem object and send it to INavigateToCallback.AddItem(). But as the user types in the Search Terms field, Navigate To will invoke your StartSearch() method repeatedly with the changing search term. When you receive the next StartSearch() message, you have to abandon your current search and start a new one. You can't rely on receiving a StopSearch() message every time. Finally, when the Navigate To dialog box is closed by the user, you will get a Dispose() message – that's your cue to abandon any uncompleted searches and dispose of any resources you might be using as part of your search. While you conduct your search, invoke INavigateToCallback.ReportProgress() occasionally to provide feedback about how close you are to completing the search. There does not appear to be any particular requirement on how often you invoke ReportProgress(), and you report your progress as the ratio of two integers. In my implementation I report progress in terms of the number of symbols I've searched over the total number of symbols in my dictionary, and send a progress report every 16 symbols.

    Displaying the Results
    The Navigate To framework invokes INavigateToItemDisplayFactory.CreateItemDisplay() once for each result you passed to the INavigateToCallback. CreateItemDisplay() is passed the NavigateToItem you handed to the callback, and must return an INavigateToItemDisplay object. NavigateToItem is a sealed class which has a few properties, including the name of the symbol. It also has a Tag property, of type object. This enables you to stash away all the information you will need to create your INavigateToItemDisplay, which must implement a NavigateTo() method to display a symbol in an editor when the user double-clicks an entry in the Navigate To dialog box. Since the tag is of type object, it is up to you, the implementor, to decide what kind of object you store in here, and how it enables the retrieval of other information which is not included in the NavigateToItem properties.

    Some of the INavigateToItemDisplay properties are self-explanatory, but a couple of them are less obvious:

    AdditionalInformation
    The string you return here is displayed inside brackets on the same line as the Name property. In English locales, Visual Studio includes the preposition "of". If you look at the first line in the Navigate To screenshot at the top of the original article, Book_WebRole.Default is the additional information for textBookAuthor, and is the namespace-qualified type name the symbol appears in. For procedural COBOL code we display the Program Id as the additional information.

    DescriptionItems
    You can use this property to return any textual description you want about the item currently selected. You return a collection of DescriptionItem objects, each of which has a category and a description collection of DescriptionRun objects. A DescriptionRun enables you to specify some text, and optional formatting, so you have some control over the appearance of the displayed text. The DescriptionItems property is displayed at the bottom of the Navigate To dialog box, with the categories on the left and the descriptions on the right. The Visual COBOL implementation uses it to display more information about the location of an item, making it easier for the user to disambiguate duplicate names (something there can be a lot of in large COBOL applications).

    Summary
    I hope this article is useful for anyone implementing Navigate To. It is a fantastic navigation feature that Microsoft have added to Visual Studio, but at the moment there still don't seem to be any examples on how to implement it, and the reference information on MSDN is a little brief for anyone attempting an implementation.
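
    Tying the pieces above together, here is a rough skeleton of the factory/provider pair. It is a sketch from memory rather than copied from the reference: the member signatures (in particular the exact parameters of TryCreateNavigateToItemProvider and StartSearch) should be checked against the Microsoft.VisualStudio.Language.NavigateTo.Interfaces documentation linked above.

        using System;
        using System.ComponentModel.Composition;
        using System.Threading.Tasks;
        using Microsoft.VisualStudio.Language.NavigateTo.Interfaces;

        [Export(typeof(INavigateToItemProviderFactory))]   // MEF export named in the post
        public class CobolNavigateToProviderFactory : INavigateToItemProviderFactory
        {
            public bool TryCreateNavigateToItemProvider(IServiceProvider serviceProvider,
                                                        out INavigateToItemProvider provider)
            {
                provider = new CobolNavigateToProvider();
                return true;
            }
        }

        public class CobolNavigateToProvider : INavigateToItemProvider
        {
            public void StartSearch(INavigateToCallback callback, string searchValue)
            {
                // Return immediately; do the actual search on a background thread, as described above.
                Task.Factory.StartNew(() =>
                {
                    // foreach symbol matching searchValue:
                    //     callback.AddItem(new NavigateToItem(...));  // Tag carries lookup info for the display
                    //     callback.ReportProgress(searched, total);   // occasional progress, ratio of two ints
                });
            }

            public void StopSearch() { /* signal the background search to abandon its work */ }
            public void Dispose()    { /* release any search resources */ }
        }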

    Read the article

  • Data scheme question

    - by Matt
    I am designing a data model for a local city page - more like requirements for it. So, 4 tables: Country, State, City, Neighbourhood. The real-life relationship is: a Country owns multiple States, which own multiple Cities, which own multiple Neighbourhoods. In the data model, do we link these with FKs the same way, or link each with each? As in, each table would even have a CountryID, StateID, CityID and NeighbourhoodID, so each is connected with each? Otherwise, to reach a neighbourhood from a country we need to join the 2 other tables in between. There are more tables I need to maintain for IP addresses of cities, latitude/longitude, etc.
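
    The conventional normalized shape is a chain of foreign keys, one level up per table; the cost of joining through State and City is usually acceptable, and denormalized shortcut columns (e.g. a CountryID on Neighbourhood) can be added later purely for query convenience. A sketch of the chained version with assumed column names:

        CREATE TABLE Country       (CountryID INT PRIMARY KEY, Name VARCHAR(100) NOT NULL);
        CREATE TABLE State         (StateID INT PRIMARY KEY, CountryID INT NOT NULL REFERENCES Country(CountryID), Name VARCHAR(100) NOT NULL);
        CREATE TABLE City          (CityID INT PRIMARY KEY, StateID INT NOT NULL REFERENCES State(StateID), Name VARCHAR(100) NOT NULL);
        CREATE TABLE Neighbourhood (NeighbourhoodID INT PRIMARY KEY, CityID INT NOT NULL REFERENCES City(CityID), Name VARCHAR(100) NOT NULL);

        -- Country -> Neighbourhood then needs two intermediate joins:
        -- SELECT n.* FROM Neighbourhood n
        --   JOIN City c  ON c.CityID  = n.CityID
        --   JOIN State s ON s.StateID = c.StateID
        --  WHERE s.CountryID = 1;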

    Read the article

  • File IO, Handling CRLF

    - by aCuria
    Hi, I am writing a program that takes a file, splits it up into multiple small files of a user-specified size, and then joins the multiple small files back again. 1) The code must work for C and C++. 2) I am compiling with multiple compilers. I am reading and writing the files using the standard library functions fread() and fwrite(). The problem I am having pertains to CRLF. If the file I am reading from contains CRLF, then I want to retain it when I split and join the files back together. If the file contains LF, then I want to retain LF. Unfortunately, fread() seems to store CRLF as \n (I think), and whatever is written by fwrite() is compiler-dependent. How do I approach this problem? Thanks.
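
    The translation described above is what text mode does on Windows; opening the files in binary mode makes fread()/fwrite() pass bytes through untouched, so CRLF and LF are both preserved. A minimal C sketch (the file names and chunking scheme are placeholders):

        #include <stdio.h>

        /* Copy one chunk verbatim: "rb"/"wb" disable the CRLF <-> '\n' translation
           that text mode performs on Windows, so line endings survive split/join. */
        int copy_chunk(const char *src, const char *dst, long offset, size_t length)
        {
            char buf[4096];
            FILE *in = fopen(src, "rb");
            FILE *out = fopen(dst, "wb");
            if (!in || !out) { if (in) fclose(in); if (out) fclose(out); return -1; }

            fseek(in, offset, SEEK_SET);
            while (length > 0) {
                size_t want = length < sizeof buf ? length : sizeof buf;
                size_t got = fread(buf, 1, want, in);
                if (got == 0) break;                 /* EOF or read error */
                fwrite(buf, 1, got, out);
                length -= got;
            }
            fclose(in);
            fclose(out);
            return 0;
        }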

    Read the article

  • Parallel Task In C#.net

    - by Test123
    I have a C#.NET application. I wanted to run my application in a thread, but because of a third-party DLL the application cannot be used multithreaded. There is one object in the third-party DLL which only allows one instance to be created at a time. When I manually run the application exe multiple times and process my data, it processes successfully (might be because each exe runs with its own application domain). I need to implement the same thing from C# code. For that I have created a DLL which can be accessed by Type.GetTypeFromProgID(), but multiple DLL instances create the same problem. Is there any way I could achieve manual parallelism through code, processing the same exe code in multiple application domains?
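
    Since running separate exe instances works, one in-process option on the .NET Framework is to spin up extra AppDomains and run the code that touches the third-party object behind a MarshalByRefObject proxy, one domain per worker. A rough sketch; Worker and its Process method are hypothetical stand-ins for the real code:

        using System;

        // Lives in its own AppDomain; calls from other domains are remoted to it.
        public class Worker : MarshalByRefObject
        {
            public string Process(string input)
            {
                // Create and use the third-party object here, isolated per domain.
                return input.ToUpperInvariant();   // placeholder work
            }
        }

        public static class Program
        {
            public static void Main()
            {
                for (int i = 0; i < 3; i++)
                {
                    AppDomain domain = AppDomain.CreateDomain("worker-" + i);
                    var worker = (Worker)domain.CreateInstanceAndUnwrap(
                        typeof(Worker).Assembly.FullName, typeof(Worker).FullName);

                    Console.WriteLine(worker.Process("job " + i));
                    AppDomain.Unload(domain);
                }
            }
        }

    Note this only isolates managed state per domain; whether the third-party component tolerates it depends on whether its single-instance restriction is per AppDomain or per process. If it is per process, separate processes (as in the manual test) remain the safer route.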

    Read the article

  • How to Embed Image in Outlook Signature?

    - by BlackMael
    Is it possible to create an HTML email signature for Outlook 2003 or above that doesn't reference external images? That is, using those special "cid" references, but embedding the image itself in the signature and not on the file system or network. This is for a web application that generates a "standard" email signature based on various input from a user. It has worked fine so far with a single "embedded" image, but a new feature is going to require the possible addition of multiple tiny images. Getting the user to save one email signature template and one image to their machine is about the limit of what I'd like to require of the user, but forcing the user to save multiple images seems to be pushing things a little too far in my opinion.

    Read the article
