Search Results

Search found 16768 results on 671 pages for 'folder compare'.


  • Outlook 2003 View Control with C#

    - by Eleasar
    I want to embed a custom C# Windows Forms (or WPF) user control in an Outlook view. I am using Outlook 2003 and Visual Studio 2008. I downloaded an example for Outlook 2007 here: http://blogs.msdn.com/e2eblog/archive/2008/01/09/outlook-folder-homepage-hosting-wpf-activex-and-windows-forms-controls.aspx and also here: http://msdn.microsoft.com/en-us/library/aa479345.aspx I tested it and it works under 2007, but under 2003 I get the following error when I try to open the view: Could not complete the operation due to error 80131509. I can start it from Visual Studio, it registers the folder just fine, debugging works and all that. It creates an HTML page that contains my type as an object parameter - but the Initialize method that should be called is either not present (not visible to the JavaScript) or it has some errors. The breakpoints for RegisterSafeForScripting are also never hit - maybe that is related. Thanks in advance!
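    For reference (this is not part of the original question), the linked samples generally mark the hosted control as safe for scripting through a ComRegisterFunction. A minimal sketch of that pattern follows; the two GUIDs are the standard "safe for scripting" and "safe for initializing" component categories, but verify the details against the sample you actually downloaded.

        using System;
        using System.Runtime.InteropServices;
        using Microsoft.Win32;

        public static class ScriptingSafetyRegistration
        {
            // Invoked by regasm when the assembly is registered for COM interop.
            [ComRegisterFunction]
            public static void RegisterSafeForScripting(Type t)
            {
                // Without these category keys, the folder home page's <object> tag
                // will refuse to create or script the hosted control.
                string[] categoryIds =
                {
                    "{7DD95801-9882-11CF-9FA9-00AA006C42C4}",  // CATID_SafeForScripting
                    "{7DD95802-9882-11CF-9FA9-00AA006C42C4}"   // CATID_SafeForInitializing
                };

                foreach (string catid in categoryIds)
                {
                    string keyPath = @"CLSID\" + t.GUID.ToString("B") + @"\Implemented Categories\" + catid;
                    RegistryKey key = Registry.ClassesRoot.CreateSubKey(keyPath);
                    if (key != null) key.Close();
                }
            }
        }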

    Read the article

  • How can I get TFS 2010 to build each project to a separate directory?

    - by Jonathan Schuster
    In our project, we'd like to have our TFS build put each project into its own folder under the drop folder, instead of dropping all of the files into one flat structure. To illustrate, we'd like to see something like this:

      DropFolder/
        Foo/
          foo.exe
        Bar/
          bar.dll
        Baz/
          baz.dll

    This is basically the same question as was asked here, but now that we're using workflow-based builds, those solutions don't seem to work. The solution using the CustomizableOutDir property looked like it would work best for us, but I can't get that property to be recognized. I customized our workflow to pass it to MSBuild as a command-line argument (/p:CustomizableOutDir=true), but MSBuild seems to just ignore it and puts the output into the OutDir given by the workflow. I looked at the build logs, and I can see that the CustomizableOutDir and OutDir properties are both getting set in the command-line arguments to MSBuild. I still need OutDir to be passed in so that I can copy my files to TeamBuildOutDir at the end. Any idea why my CustomizableOutDir parameter isn't getting recognized, or whether there's a better way to achieve this?

    Read the article

  • faking a filesystem / virtual filesystem

    - by attwad
    I have a web service to which users upload Python scripts that are run on a server. Those scripts process files that are on the server, and I want them to be able to see only a certain hierarchy of the server's filesystem (ideally: a temporary folder into which I copy the files to be processed and the scripts). The server will ultimately be a Linux-based one, but if a solution is also possible on Windows it would be nice to know how. What I thought of is creating a user with restricted access to folders of the FS - ultimately only the folder containing the scripts and files - and launching the Python interpreter as that user. Can someone give me a better alternative? Relying only on this makes me feel insecure; I would like a real sandboxing or virtual FS feature where I could safely run untrusted code.

    Read the article

  • Windows Explorer argument not being interpreted?

    - by MarceloRamires
    I'm currently using: Process.Start("explorer.exe", "/Select, " + fullPath); //with file name.extension That basically tells Windows Explorer to open the folder the file is in, with the given file selected. But, by the forces of evil, sometimes it works and sometimes it just opens the file ("/select" is... ignored). Has anyone ever experienced this? If it changes anything, it's a file on a local network and the path looks like this: \\server\folder\subfolder\something\file.ext (of course in C# every '\' is doubled, and @ is not an option - the path is generated by something else). I'll be constantly reading comments to supply any additional information.
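    This is not part of the original post, but a variation that is often suggested for this situation is to drop the space after the comma and quote the path, so that spaces or commas inside the path cannot be parsed as separate arguments. A minimal sketch:

        using System.Diagnostics;

        public static class ExplorerHelper
        {
            public static void RevealInExplorer(string fullPath)
            {
                // Quote the path: without quotes, a space or comma in the path can make
                // Explorer treat the remainder as extra arguments and ignore /select.
                string arguments = "/select,\"" + fullPath + "\"";
                Process.Start("explorer.exe", arguments);
            }
        }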

    Read the article

  • ASP.Net Application Trust Medium File IO Outside Virtual Directory

    - by Trey Gramann
    I am trying to determine how suicidal this is... I have a hosting environment where a custom ASP.NET CMS application needs to access the files in the root folder of a website even though it is in a virtual folder, so it can be shared across many sites. I can modify the Medium trust on the server and came up with this... <IPermission class="FileIOPermission" version="1" Read="$AppDir$;$AppDir$\.." Write="$AppDir$;$AppDir$\.." Append="$AppDir$;$AppDir$\.." PathDiscovery="$AppDir$;$AppDir$\.."/> Oddly enough, it works. Yes, I understand it is doing this for all the apps. I am a bit at a loss as to easy ways to test what else is being exposed. Feels dangerous. Opinions?
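    Not from the original post, but one quick way to probe what a modified trust level actually grants is to demand FileIOPermission for paths of interest from code running under that trust level. A rough sketch (the helper name is made up; paths must be absolute):

        using System.Security;
        using System.Security.Permissions;

        public static class TrustProbe
        {
            // Returns true if the current trust level grants read access to the given absolute path.
            public static bool CanRead(string absolutePath)
            {
                try
                {
                    new FileIOPermission(FileIOPermissionAccess.Read, absolutePath).Demand();
                    return true;
                }
                catch (SecurityException)
                {
                    return false;
                }
            }
        }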

    Read the article

  • Rewrite rules for subfolders

    - by pg
    This may seem like a silly question but I can't figure it out. Let's say I have a public_html folder with various folders like Albatross, Blackbirds, Crows and Faqs. I want to make it so that any traffic to Albatross/faqs.php, Blackbirds/faqs.php, Crows/faqs.php etc. will see the file that is at faqs/faqs.php?bird=albatross or faqs/faqs.php?bird=crows or what have you. If I go into the Albatross folder's .htaccess file I can do this: RewriteRule faqs.php$ /faqs/faqs.php?cat=albatross[QSA] which works fine, but I want to put something in the top-level .htaccess that works for all of them, so I tried: RewriteRule faqs.php$ /faqs/faqs.php?cat=albatross[QSA] RewriteRule /(.*)/faqs.php$ /faqs/faqs.php?cat=$1 [QSA] and even RewriteRule /albatross/faqs.php$ /faqs/faqs.php?cat=albatross [QSA] and various others, but nothing seems to work: when I go to http://www.birdsandwhatnot.com/albatross/faqs.php I see the same file the same way it's always been. Does the presence of an .htaccess file in the subfolder conflict with the higher-up .htaccess file? Am I missing something?

    Read the article

  • PeopleSoft Upgrades, Fusion, & BI for Leading European PeopleSoft Applications Customers

    - by Mark Rosenberg
    With so many industry-leading services firms around the globe managing their businesses with PeopleSoft, it’s always an adventure setting up times and meetings for us to keep in touch with them, especially those outside of North America who often do not get to join us at Oracle OpenWorld. Fortunately, during the first two weeks of May, Nigel Woodland (Oracle’s Service Industries Director for the EMEA region) and I successfully blocked off our calendars to visit seven different customers spanning four countries in Western Europe. We met executives and leaders at four Staffing industry firms, two Professional Services firms that engage in consulting and auditing, and a Financial Services firm. As we shared the latest information regarding product capabilities and plans, we also gained valuable insight into the hot technology topics facing these businesses. What we heard was both informative and inspiring, and I suspect other Oracle PeopleSoft applications customers can benefit from one or more of the following observations from our trip. Great IT Plans Get Executed When You Respect the Users Each of our visits followed roughly the same pattern. After introductions, Nigel outlined Oracle’s product and technology strategy, including a discussion of how we at Oracle invest in each layer of the “technology stack” to provide customers with unprecedented business management capabilities and choice. Then, I provided the specifics of the PeopleSoft product line’s investment strategy, detailing the dramatic number of rich usability and functionality enhancements added to release 9.1 since its general availability in 2009 and the game-changing capabilities slated for 9.2. What was most exciting about each of these discussions was that shortly after my talking about what customers can do with release 9.1 right now to drive up user productivity and satisfaction, I saw the wheels turning in the minds of our audiences. Business analyst and end user-configurable tools and technologies, such as WorkCenters and the Related Action Framework, that provide the ability to tailor a “central command center” to the exact needs of each recruiter, biller, and every other role in the organization were exactly what each of our customers had been looking for. Every one of our audiences agreed that these tools which demonstrate a respect for the user would finally help IT pole vault over the wall of resistance that users had often raised in the past. With these new user-focused capabilities, IT is positioned to definitively partner with the business, instead of drag the business along, to unlock the value of their investment in PeopleSoft. This topic of respecting the user emerged during our very first visit, which was at Vital Services Group at their Head Office “The Mill” in Manchester, England. (If you are a student of architecture and are ever in Manchester, you should stop in to see this amazingly renovated old mill building.) I had just finished explaining our PeopleSoft 9.2 roadmap, and Mike Code, PeopleSoft Systems Manager for this innovative staffing company, said, “Mark, the new features you’ve shown us in 9.1/9.2 are very relevant to our business. As we forge ahead with the 9.1 upgrade, the ability to configure a targeted user interface with WorkCenters, Related Actions, Pivot Grids, and Alerts will enable us to satisfy the business that this upgrade is for them and will deliver tangible benefits. 
In fact, you’ve highlighted that we need to start talking to the business to keep up the momentum to start reviewing the 9.2 upgrade after we get to 9.1, because as much as 9.1 and PeopleTools 8.52 offers, what you’ve shown us for 9.2 is what we’ve envisioned was ultimately possible with our investment in PeopleSoft applications.” We also received valuable feedback about our investment for the Staffing industry when we visited with Hans Wanders, CIO of Randstad (the second largest Staffing company in the world) in the Netherlands. After our visit, Hans noted, “It was very interesting to see how the PeopleSoft applications have developed. I was truly impressed by many of the new developments.” Hans and Mike, sincere thanks for the validation that our team’s hard work and dedication to “respecting the users” is worth the effort! Co-existence of PeopleSoft and Fusion Applications Just Makes Sense As a “product person,” one of the most rewarding things about visiting customers is that they actually want to talk to me. Sometimes, they want to discuss a product area that we need to enhance; other times, they are interested in learning how to extract more value from their applications; and still others, they want to tell me how they are using the applications to drive real value for the business. During this trip, I was very pleased to hear that several of our customers not only thought the co-existence of Fusion applications alongside PeopleSoft applications made sense in theory, but also that they were aggressively looking at how to deploy one or more Fusion applications alongside their PeopleSoft HCM and FSCM applications. The most common deployment plan in the works by three of the organizations is to upgrade to PeopleSoft 9.1 or 9.2, and then adopt one of the new Fusion HCM applications, such as Fusion Performance Management or the full suite of  Fusion Talent Management. For example, during an applications upgrade planning discussion with the staffing company Hays plc., Mark Thomas, who is Hays’ UK IT Director, commented, “We are very excited about where we can go with the latest versions of the PeopleSoft applications in conjunction with Fusion Talent Management.” Needless to say, this news was very encouraging, because it reiterated that our applications investment strategy makes good business sense for our customers. Next Generation Business Intelligence Is the Key to the Future The third, and perhaps most exciting, lesson I learned during this journey is that our audiences already know that the latest generation of Business Intelligence technologies will be the “secret sauce” for organizations to transform business in radical ways. While a number of the organizations we visited on the trip have deployed or are deploying Oracle Business Intelligence Enterprise Edition and the associated analytics applications to provide dashboards of easy-to-understand, user-configurable metrics that help optimize business performance according to current operating procedures, what’s most exciting to them is being able to use Business Intelligence to change the way an organization does business, grows revenue, and makes a profit. In particular, several executives we met asked whether we can help them minimize the need to have perfectly structured data and at the same time generate analytics that improve order fulfillment decision-making. 
To them, the path to future growth lies in having the ability to analyze unstructured data rapidly and intuitively and leveraging technology’s ability to detect patterns that a human cannot reasonably be expected to see. For illustrative purposes, here is a good example of a business problem where analyzing a combination of structured and unstructured data can produce better results. If you have a resource manager trying to decide which person would be the best fit for an assignment in terms of ensuring (a) client satisfaction, (b) the individual’s satisfaction with the work, (c) least travel distance, and (d) highest margin, you traditionally compare resource qualifications to assignment needs, calculate margins on past work with the client, and measure distances. To perform these comparisons, you are likely to need the organization to have profiles setup, people ranked against profiles, margin targets setup, margins measured, distances setup, distances measured, and more. As you can imagine, this requires organizations to plan and implement data setup, capture, and quality management initiatives to ensure that dependable information is available to support resourcing analysis and decisions. In the fast-paced, tight-budget world in which most organizations operate today, the effort and discipline required to maintain high-quality, structured data like those described in the above example are certainly not desirable and in some cases are not feasible. You can imagine how intrigued our audiences were when I informed them that we are ready to help them analyze volumes of unstructured data, detect trends, and produce recommendations. Our discussions delved into examples of how the firms could leverage Oracle’s Secure Enterprise Search and Endeca technologies to keyword search against, compare, and learn from unstructured resource and assignment data. We also considered examples of how they could employ Oracle Real-Time Decisions to generate statistically significant recommendations based on similar resourcing scenarios that have produced the desired satisfaction and profit margin results. --- Although I had almost no time for sight-seeing during this trip to Europe, I have to say that it may have been one of the most energizing and engaging trips of my career. Showing these dedicated customers how they can give every user a uniquely tailored set of tools and address business problems in ways that have to date been impossible made the journey across the Atlantic more than worth it. If any of these three topics intrigue you, I’d recommend you contact your Oracle applications representative to arrange for more detailed discussions with the appropriate members of our organization.

    Read the article

  • nmake makefile, linking objects files in a subfolder

    - by Gauthier
    My makefile defines a link command: prod_link = $(LINK) $(LINK_FLAGS) -o$(PROD_OUT) $(PROD_OBJS) where $(PROD_OBJS) is a list of object files of the form: PROD_OBJS = objfile1.obj objfile2.obj objfile3.obj ... objfileN.obj Now the makefile itself is at the root of my project directory. It gets messy to have object and listing files at the root, so I'd like to put them in a subfolder. Building and outputting the obj files to a subfolder works; I'm doing it with suffixes and inference: .s.obj: $(ASSEMBLY) $(FLAGS) $*.s -o Objects\$*.obj The problem is passing the Objects folder to the link command. I tried: prod_link = $(LINK) $(LINK_FLAGS) -o$(PROD_OUT) Objects\$(PROD_OBJS) but only the first file in the list of object files gets the folder's name. How can I prepend the Objects subfolder to all files of my list $(PROD_OBJS)? EDIT: I also tried PROD_OBJS = $(patsubst %.ss,Object\%.obj, $(PROD_SRC)) but got: makefile(51) : fatal error U1000: syntax error : ')' missing in macro invocation Stop. This is quite strange...

    Read the article

  • Jquery Uploadify checkscript

    - by Kevin McPhail
    I am trying to implement the checkScript feature of Uploadify in an ASP.NET MVC view, but I can't determine what the key is that I should be using on the controller side. Below is the PHP script, but I am not very familiar with PHP and can't determine what PHP is scraping out of the HTTP request. Has anyone implemented this? The documentation is a little sparse (as in nonexistent). $fileArray = array(); foreach ($_POST as $key => $value) { if ($key != 'folder') { if (file_exists($_SERVER['DOCUMENT_ROOT'] . $_POST['folder'] . '/' . $value)) { $fileArray[$key] = $value; } } } echo json_encode($fileArray); ?>
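    For illustration only (not part of the original question): the PHP above simply walks every posted key/value pair except 'folder', checks whether a file with the posted name already exists under the posted folder, and echoes the matches back as JSON. A rough ASP.NET MVC 2 equivalent might look like the sketch below; the controller and action names are made up.

        using System.Collections.Generic;
        using System.IO;
        using System.Web.Mvc;

        public class UploadifyController : Controller
        {
            [HttpPost]
            public ActionResult CheckExisting()
            {
                var existingFiles = new Dictionary<string, string>();
                string folder = Request.Form["folder"] ?? string.Empty;

                foreach (string key in Request.Form.AllKeys)
                {
                    if (key == "folder")
                        continue;

                    string fileName = Request.Form[key];
                    string physicalPath = Path.Combine(Server.MapPath("~/" + folder.TrimStart('/')), fileName);

                    // Mirror the PHP: report only the files that already exist on the server.
                    if (System.IO.File.Exists(physicalPath))
                        existingFiles[key] = fileName;
                }

                return Json(existingFiles);   // same shape as the PHP json_encode($fileArray)
            }
        }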

    Read the article

  • Open flash chart not working

    - by Axel
    I'm trying to implement Open Flash Chart on my website but it doesn't work. The chart starts loading for a second, then the loader animation disappears and nothing happens (only a black SWF area). I've downloaded the latest version, which is 2, and here is my folder scheme:

      ROOT/
        JS/
        open-flash-chart/
        php-ofc-library/
        open-flash-chart.swf
        mydata.php
        mypage.html

    This is mydata.php content: {"elements":[{"type":"bar","values":[1,2,3,4,5,6,7,8,9]}],"title":{"text":"Wed Apr 21 2010"}} This is mypage.html content: <script type="text/javascript" src="js/swfobject.js"></script> <script type="text/javascript"> swfobject.embedSWF("open-flash-chart.swf", "my_chart", "550", "200","9.0.0", expressInstall.swf", {"data-file":"mydata.php"} ); </script> <div id="my_chart"></div> The JS folder contains swfobject and the open-flash-chart folder contains the ActionScript classes of the chart. Is there any mistake I made? Thanks

    Read the article

  • Using Repository and Unit of Work patterns with Entity Framework 4.0 and MVC 2

    - by Mr. D
    Hi, I'm following this article: Using Repository and Unit of Work patterns with Entity Framework 4.0. I'm trying to implement the Repository and Unit of Work patterns, using ASP.NET MVC 2 and Entity Framework 4. Please let me know if I'm doing it right... In the Models folder: Northwind.edmx, Products.cs (POCO class), ProductRepository.cs (does my product query), IProductRepository.cs, NorthwindContext.cs, IUnitOfWork.cs. In the Controller folder: ProductController.cs (retrieves from ProductRepository.cs and passes it to the view). When I run the application, I'm getting the error message: Mapping and metadata information could not be found for EntityType 'NorthwindMvcPoco.Models.Category'. I don't know what I'm doing wrong. I've searched the whole web and I couldn't resolve this issue. Please help me.
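    As a reference point only (not from the original question), the pattern described in that article typically boils down to small interfaces along these lines; this is a rough sketch with assumed member names that mirror the files listed above:

        using System.Linq;

        public interface IUnitOfWork
        {
            // Pushes all pending changes through the underlying ObjectContext.
            void Commit();
        }

        public interface IProductRepository
        {
            // Queried by ProductController and handed to the view.
            IQueryable<Product> GetProducts();
            void Add(Product product);
        }

        // POCO entity mapped by Northwind.edmx.
        public class Product
        {
            public int ProductID { get; set; }
            public string ProductName { get; set; }
        }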

    Read the article

  • Eclipse Could not Delete error

    - by KáGé
    Hello, I'm working on a project in Eclipse and until now everything was fine, but the last time I tried building it, it returned the error "The project was not built due to "Could not delete '/Torpedo/bin/bin'.". Fix the problem, then try refreshing this project and building it since it may be inconsistent Torpedo Unknown Java Problem" and it deleted my bin folder, which stores all the images and other resources needed by the program. (Fortunately I had a backup.) I've tried googling it and tried every solution I found, but nothing helped, and most of them suggest deleting the folder by hand, which I can't do. What should I do? Thanks.

    Read the article

  • ASP.NET Charting Control no longer working with .NET 4

    - by Moose Factory
    I've just upgraded to .NET 4 and my ASP.NET Chart Control no longer displays. For .NET 3.5, the HTML produced by the control used to look like this: <img id="20_Chart" src="/ChartImg.axd?i=chart_5f6a8fd179a246a5a0f4f44fcd7d5e03_0.png&amp;g=16eb7881335e47dcba16fdfd8339ba1a" alt="" style="height:300px;width:300px;border-width:0px;" /> and now, for .NET 4, it looks like this (note the change in the source path): <img id="20_Chart" src="/Statistics/Summary/ChartImg.axd?i=chart_5f6a8fd179a246a5a0f4f44fcd7d5e03_0.png&amp;g=16eb7881335e47dcba16fdfd8339ba1a" alt="" style="height:300px;width:300px;border-width:0px;" /> The chart is in an MVC partial view that is in an MVC Area folder called "Statistics" and an MVC Views folder called "Summary" (i.e. "/Areas/Statistics/Views/Summary"), so this is obviously where the change of path is coming from. All I've done is switch the System.Web.DataVisualization assembly from 3.5 to 4.0. Any help greatly appreciated.

    Read the article

  • Create a batch file to copy and rename file

    - by Estate Master
    I have little to no experience in writing batch files. I need to write one that copies a file to a new folder and renames it. At the moment, my batch file consists of only this command: COPY ABC.PDF \\Documents As you can see, it only copies the file ABC.pdf to the network folder 'Documents'. However, I need to change this so that it renames the file to ABCxxx.pdf, where xxx is a text variable that I would like to set somewhere in the batch file. For example, if xxx = "_Draft", then the file would be renamed ABC_Draft.pdf after it is copied. Thanks

    Read the article

  • Cygwin in Windows 7

    - by Algorist
    Hi, I am a fan of Linux, but due to the poor Intel wireless drivers on Linux I had to switch to Windows 7. I have installed Cygwin on Windows and want to configure SSH so that I can connect to my laptop remotely. I googled and found this webpage: http://art.csoft.net/2009/09/02/cygwin-ssh-server-and-windows-7/ I am getting the following error when running ssh-host-config:

      bala@bala-PC ~
      $ ssh-host-config yes
      *** Info: Creating default /etc/ssh_config file
      *** Query: Overwrite existing /etc/sshd_config file? (yes/no) yes
      *** Info: Creating default /etc/sshd_config file
      *** Info: Privilege separation is set to yes by default since OpenSSH 3.3.
      *** Info: However, this requires a non-privileged account called 'sshd'.
      *** Info: For more info on privilege separation read /usr/share/doc/openssh/README.privsep.
      *** Query: Should privilege separation be used? (yes/no) no
      *** Info: Updating /etc/sshd_config file
      *** Warning: The following functions require administrator privileges!
      *** Query: Do you want to install sshd as a service?
      *** Query: (Say "no" if it is already installed as a service) (yes/no) yes
      *** Query: Enter the value of CYGWIN for the daemon: []
      *** Info: On Windows Server 2003, Windows Vista, and above, the
      *** Info: SYSTEM account cannot setuid to other users -- a capability
      *** Info: sshd requires. You need to have or to create a privileged
      *** Info: account. This script will help you do so.
      *** Warning: The owner and the Administrators need
      *** Warning: to have .w. permission to /var/run.
      *** Warning: Here are the current permissions and ACLS:
      *** Warning: drwxr-xr-x 1 bala None 0 2010-01-17 22:34 /var/run
      *** Warning: # file: /var/run
      *** Warning: # owner: bala
      *** Warning: # group: None
      *** Warning: user::rwx
      *** Warning: group::r-x
      *** Warning: other:r-x
      *** Warning: mask:rwx
      *** Warning:
      *** Warning: Please change the user and/or group ownership,
      *** Warning: permissions, or ACLs of /var/run.
      *** ERROR: Problem with /var/run directory. Exiting.

    The permissions of this folder are shown as Read-only (only applies to this folder), checked in gray. I tried to uncheck it, but after I open the properties again the box is checked again. Is there a way to change the permissions of this folder? Thank you

    Read the article

  • On StringComparison Values

    - by Jesse
    When you use the .NET Framework's String.Equals and String.Compare methods, do you use an overload that takes a StringComparison enumeration value? If not, you should, because the value provided for that StringComparison argument can have a big impact on the results of your string comparison. The StringComparison enumeration defines values that fall into three major categories:

    Culture-sensitive comparison using a specific culture, defaulting to the Thread.CurrentThread.CurrentCulture value (StringComparison.CurrentCulture and StringComparison.CurrentCultureIgnoreCase)
    Invariant culture comparison (StringComparison.InvariantCulture and StringComparison.InvariantCultureIgnoreCase)
    Ordinal (byte-by-byte) comparison (StringComparison.Ordinal and StringComparison.OrdinalIgnoreCase)

    There is a lot of great material available that details the technical ins and outs of these different string comparison approaches. If you're at all interested in the topic, these two MSDN articles are worth a read: Best Practices For Using Strings in the .NET Framework: http://msdn.microsoft.com/en-us/library/dd465121.aspx and How To Compare Strings: http://msdn.microsoft.com/en-us/library/cc165449.aspx Those articles cover the technical details of string comparison well enough that I'm not going to reiterate them here, other than to say that the upshot is that you typically want to use the culture-sensitive comparison whenever you're comparing strings that were entered by or will be displayed to users, and the ordinal comparison in nearly all other cases. So where does that leave the invariant culture comparisons? The "Best Practices For Using Strings in the .NET Framework" article has the following to say: "On balance, the invariant culture has very few properties that make it useful for comparison. It does comparison in a linguistically relevant manner, which prevents it from guaranteeing full symbolic equivalence, but it is not the choice for display in any culture. One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display. For example, if a large data file that contains a list of sorted identifiers for display accompanies an application, adding to this list would require an insertion with invariant-style sorting." I don't know about you, but I feel like that paragraph is a bit lacking. Are there really any "real world" reasons to use the invariant culture comparison? I think the answer to this question is "yes", but in order to understand why, we should first think about what the invariant culture comparison really does. The invariant culture comparison is really just a culture-sensitive comparison using a special invariant culture (Michael Kaplan has a great post on the history of the invariant culture on his blog: http://blogs.msdn.com/b/michkap/archive/2004/12/29/344136.aspx). This means that the invariant culture comparison will apply the linguistic customs defined by the invariant culture, which are guaranteed not to differ between different machines or execution contexts. This sort of consistency does prove useful if you need to maintain a list of strings that are sorted in a meaningful and consistent way regardless of the user viewing them or the machine on which they are being viewed.

    Example: Prototype Names

    Let's say that you work for a large multi-national toy company with branch offices in 10 different countries.
    Each year the company works on 15-25 new toy prototypes, each of which is assigned a "code name" while it is under development. Coming up with fun new code names is a big part of the company culture that everyone really enjoys, so to be fair the CEO of the company spent a lot of time coming up with a prototype naming scheme that would be fun for everyone to participate in, fair to all of the different branch locations, and accessible to all members of the organization regardless of the country they were from and the language that they spoke. Each new prototype gets a code name that begins with the letter following the previously created name in the alphabetical order of the Latin/Roman alphabet. Each new year, prototype names start back at "A". The country that leads the prototype development effort gets to choose the name in their native language. (An appropriate Romanization system will be used for countries where the primary language is not written in the Latin/Roman alphabet. For example, the Pinyin system could be used for Chinese.) To avoid repeating names, a list of all current and past prototype names will be maintained on each branch location's company intranet site. Assuming that maintaining a single pre-sorted list is not feasible among all of the highly distributed intranet implementations, what string comparison method would you use to sort each year's list of prototype names so that the list is both meaningful and consistent regardless of the country within which the list is being viewed? Sorting the list with a culture-sensitive comparison using the default configured culture on each country's intranet server would probably work most of the time, but subtle differences between cultures could mean that two different people would see a list that was sorted slightly differently. The CEO wants the prototype names to be a unifying aspect of company culture and is adamant that everyone see the same list sorted in the same order, and there's no way to guarantee a consistent sort across different cultures using the culture-sensitive string comparison rules. The culture-sensitive sort would produce a meaningful list for the specific user viewing it, but it wouldn't always be consistent between different users. Sorting with the ordinal comparison would certainly be consistent regardless of the user viewing it, but would it be meaningful? Let's say that the current year's prototype name list looks like this:

      Antílope (Spanish)
      Babouin (French)
      Čahoun (Czech)
      Diamond (English)
      Flosse (German)

    If you were to sort this list using ordinal rules you'd end up with:

      Antílope
      Babouin
      Diamond
      Flosse
      Čahoun

    This sort is no good because the entry for "C" appears at the bottom of the list, after "F". This is because the Czech entry for the letter "C" makes use of a diacritic (accent mark). The ordinal string comparison does a byte-by-byte comparison of the code points that make up each character in the string, and the code point for the "Č" with the diacritic mark is higher than any letter without a diacritic mark, which pushes that entry to the bottom of the sorted list. The CEO wants each country to be able to create prototype names in their native language, which means we need to allow for names that might begin with letters that have diacritics, so ordinal sorting kills the meaningfulness of the list. As it turns out, this situation is actually well-suited to the invariant culture comparison.
The invariant culture accounts for linguistically relevant factors like the use of diacritics but will provide a consistent sort across all machines that perform the sort. Now that we’ve walked through this example, the following line from the “Best Practices For Using Strings in the .NET Framework” makes a lot more sense: One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display That line describes the prototype name example perfectly: we need a way to persist ordered data for a cross-culturally identical display. While this example is 100% made-up, I think it illustrates that there are indeed real-world situations where the invariant culture comparison is useful.
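    To make the example concrete, here is a small sketch (not from the original post) that reproduces the two sort orders discussed above. It assumes the Czech name carries the háček the post describes ("Čahoun"):

        using System;
        using System.Collections.Generic;

        class PrototypeNameSort
        {
            static void Main()
            {
                var names = new List<string> { "Antílope", "Babouin", "Čahoun", "Diamond", "Flosse" };

                // Ordinal: raw code-point order pushes "Č" (U+010C) past every unaccented Latin letter.
                names.Sort(StringComparer.Ordinal);
                Console.WriteLine(string.Join(", ", names.ToArray()));
                // -> Antílope, Babouin, Diamond, Flosse, Čahoun

                // Invariant culture: linguistic ordering that is identical on every machine.
                names.Sort(StringComparer.InvariantCulture);
                Console.WriteLine(string.Join(", ", names.ToArray()));
                // -> Antílope, Babouin, Čahoun, Diamond, Flosse
            }
        }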

    Read the article

  • Eclipse: What is the minimum Eclipse installation needed for a headless PDE build?

    - by Christoph
    Hi, I am currently using PDE build in headless mode to build my OSGi bundle project. The PDE AntRunner task uses an Eclipse installation, and I am just pointing it to my local Eclipse installation. Unfortunately, my Eclipse installation is about 260MB, but I assume that a PDE build does NOT require all of the plugins in a standard Eclipse installation. Does anyone know what the minimum set of plugins is that I need for doing a headless PDE build? All of my dependencies are actually in a custom target platform folder, so I guess the only things I need from my Eclipse installation are the dependencies that PDE build itself needs. But what are those? Can I shrink my installation to a bare minimum? My goal is to also check this "build-eclipse" folder into my project's SVN so that when you check it out, you have everything you need to start a full build, without touching any build.properties. But I don't want to commit 266MB of Eclipse if I may need only 20MB of it. Thanks Christoph

    Read the article

  • NSFileManager - Copying Files at Startup

    - by David Schiefer
    Hi, I need to copy a few sample files from my app's resource folder and place them in my app's Documents folder. I came up with the attached code; it compiles fine but it doesn't work. All the directories I refer to do exist. I'm not quite sure what I am doing wrong - could someone point me in the right direction please? NSFileManager*manager = [NSFileManager defaultManager]; NSString*dirToCopyTo = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"]; NSString*path = [[NSBundle mainBundle] resourcePath]; NSString*dirToCopyFrom = [path stringByAppendingPathComponent:@"Samples"]; NSError*error; NSArray*files = [manager contentsOfDirectoryAtPath:dirToCopyTo error:nil]; for (NSString *file in files) { [manager copyItemAtPath:[dirToCopyFrom stringByAppendingPathComponent:file] toPath:dirToCopyTo error:&error]; if (error) { NSLog(@"%@",[error localizedDescription]); } }

    Read the article

  • TFS 2010 SDK: Smart Merge - Programmatically Create your own Merge Tool

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010,TFS SDK,TFS API,TFS Merge Programmatically,TFS Work Items Programmatically,TFS Administration Console,ALM   The information available in the Merge window in Team Foundation Server 2010 is very important in the decision making during the merging process. However, at present the merge window shows very limited information, more that often you are interested to know the work item, files modified, code reviewer notes, policies overridden, etc associated with the change set. Our friends at Microsoft are working hard to change the game again with vNext, but because at present the merge window is a model window you have to cancel the merge process and go back one after the other to check the additional information you need. If you can relate to what i am saying, you will enjoy this blog post! I will show you how to programmatically create your own merging window using the TFS 2010 API. A few screen shots of the WPF TFS 2010 API – Custom Merging Application that we will be creating programmatically, Excited??? Let’s start coding… 1. Get All Team Project Collections for the TFS Server You can read more on connecting to TFS programmatically on my blog post => How to connect to TFS Programmatically 1: public static ReadOnlyCollection<CatalogNode> GetAllTeamProjectCollections() 2: { 3: TfsConfigurationServer configurationServer = 4: TfsConfigurationServerFactory. 5: GetConfigurationServer(new Uri("http://xxx:8080/tfs/")); 6: 7: CatalogNode catalogNode = configurationServer.CatalogNode; 8: return catalogNode.QueryChildren(new Guid[] 9: { CatalogResourceTypes.ProjectCollection }, 10: false, CatalogQueryOptions.None); 11: } 2. Get All Team Projects for the selected Team Project Collection You can read more on connecting to TFS programmatically on my blog post => How to connect to TFS Programmatically 1: public static ReadOnlyCollection<CatalogNode> GetTeamProjects(string instanceId) 2: { 3: ReadOnlyCollection<CatalogNode> teamProjects = null; 4: 5: TfsConfigurationServer configurationServer = 6: TfsConfigurationServerFactory.GetConfigurationServer(new Uri("http://xxx:8080/tfs/")); 7: 8: CatalogNode catalogNode = configurationServer.CatalogNode; 9: var teamProjectCollections = catalogNode.QueryChildren(new Guid[] {CatalogResourceTypes.ProjectCollection }, 10: false, CatalogQueryOptions.None); 11: 12: foreach (var teamProjectCollection in teamProjectCollections) 13: { 14: if (string.Compare(teamProjectCollection.Resource.Properties["InstanceId"], instanceId, true) == 0) 15: { 16: teamProjects = teamProjectCollection.QueryChildren(new Guid[] { CatalogResourceTypes.TeamProject }, false, 17: CatalogQueryOptions.None); 18: } 19: } 20: 21: return teamProjects; 22: } 3. Get All Branches with in a Team Project programmatically I will be passing the name of the Team Project for which i want to retrieve all the branches. When consuming the ‘Version Control Service’ you have the method QueryRootBranchObjects, you need to pass the recursion type => none, one, full. Full implies you are interested in all branches under that root branch. 
1: public static List<BranchObject> GetParentBranch(string projectName) 2: { 3: var branches = new List<BranchObject>(); 4: 5: var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<teamProjectName>")); 6: var versionControl = tfs.GetService<VersionControlServer>(); 7: 8: var allBranches = versionControl.QueryRootBranchObjects(RecursionType.Full); 9: 10: foreach (var branchObject in allBranches) 11: { 12: if (branchObject.Properties.RootItem.Item.ToUpper().Contains(projectName.ToUpper())) 13: { 14: branches.Add(branchObject); 15: } 16: } 17: 18: return branches; 19: } 4. Get All Branches associated to the Parent Branch Programmatically Now that we have the parent branch, it is important to retrieve all child branches of that parent branch. Lets see how we can achieve this using the TFS API. 1: public static List<ItemIdentifier> GetChildBranch(string parentBranch) 2: { 3: var branches = new List<ItemIdentifier>(); 4: 5: var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>")); 6: var versionControl = tfs.GetService<VersionControlServer>(); 7: 8: var i = new ItemIdentifier(parentBranch); 9: var allBranches = 10: versionControl.QueryBranchObjects(i, RecursionType.None); 11: 12: foreach (var branchObject in allBranches) 13: { 14: foreach (var childBranche in branchObject.ChildBranches) 15: { 16: branches.Add(childBranche); 17: } 18: } 19: 20: return branches; 21: } 5. Get Merge candidates between two branches Programmatically Now that we have the parent and the child branch that we are interested to perform a merge between we will use the method ‘GetMergeCandidates’ in the namespace ‘Microsoft.TeamFoundation.VersionControl.Client’ => http://msdn.microsoft.com/en-us/library/bb138934(v=VS.100).aspx 1: public static MergeCandidate[] GetMergeCandidates(string fromBranch, string toBranch) 2: { 3: var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>")); 4: var versionControl = tfs.GetService<VersionControlServer>(); 5: 6: return versionControl.GetMergeCandidates(fromBranch, toBranch, RecursionType.Full); 7: } 6. Get changeset details Programatically Now that we have the changeset id that we are interested in, we can get details of the changeset. The Changeset object contains the properties => http://msdn.microsoft.com/en-us/library/microsoft.teamfoundation.versioncontrol.client.changeset.aspx - Changes: Gets or sets an array of Change objects that comprise this changeset. - CheckinNote: Gets or sets the check-in note of the changeset. - Comment: Gets or sets the comment of the changeset. - PolicyOverride: Gets or sets the policy override information of this changeset. - WorkItems: Gets an array of work items that are associated with this changeset. 1: public static Changeset GetChangeSetDetails(int changeSetId) 2: { 3: var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>")); 4: var versionControl = tfs.GetService<VersionControlServer>(); 5: 6: return versionControl.GetChangeset(changeSetId); 7: } 7. Possibilities In future posts i will try and extend this idea to explore further possibilities, but few features that i am sure will further help during the merge decision making process would be, - View changed files - Compare modified file with current/previous version - Merge Preview - Last Merge date Any other features that you can think of?

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 2

    - by AjarnMark
    In Part 1, I started talking about using Red-Gate’s newest version of SQL Source Control and how I really like it as a viable method to source control your database development.  It looks like this is going to turn into a little series where I will explain how we have done things in the past, and how life is different with SQL Source Control.  I will also explain some of my philosophy and methodology around deployment with these tools.  But for now, let’s talk about some of the good and the bad of the tool itself. More Kudos and Features I mentioned previously how impressed I was with the responsiveness of Red-Gate’s team.  I have been having an ongoing email conversation with Gyorgy Pocsi, and as I have run into problems or requested things behave a little differently, it has not been more than a day or two before a new Build is ready for me to download and test.  Quite impressive! I’m sure much of the requests I put in were already in the plans, so I can’t really take credit for them, but throughout this conversation, Red-Gate has implemented several features that were not in the first Early Access version.  Those include: Honoring the Fortress configuration option to require Work Item (Bug) IDs on check-ins. Adding the check-in comment text as a comment to the Work Item. Adding the list of checked-in files, along with the Fortress links for automatic History and DIFF view Updating the status of a Work Item on check-in (e.g. setting the item to Complete or, in our case “Dev-Complete”) Support for the Fortress 2.0 API, and not just the Vault Pro 5.1 API.  (See later notes regarding support for Fortress 2.0). These were all features that I felt we really needed to have in-place before I could honestly consider converting my team to using SQL Source Control on a regular basis.  Now that I have those, my only excuse is not wanting to switch boats on the team mid-stream.  So when we wrap up our current release in a few weeks, we will make the jump.  In the meantime, I will continue to bang on it to make sure it is stable.  It passed one test for stability when I did a test load of one of our larger database schemas into Fortress with SQL Source Control.  That database has about 150 tables, 200 User-Defined Functions and nearly 900 Stored Procedures.  The initial load to source control went smoothly and took just a brief amount of time. Warnings Remember that this IS still in pre-release stage and while I have not had any problems after that first hiccup I wrote about last time, you still need to treat it with a healthy respect.  As I understand it, the RTM is targeted for February.  There are a couple more features that I hope make it into the final release version, but if not, they’ll probably be coming soon thereafter.  Those are: A Browse feature to let me lookup the Work Item ID instead of having to remember it or look back in my Item details.  This is just a matter of convenience. I normally have my Work Item list open anyway, so I can easily look it up, but hey, why not make it even easier. A multi-line comment area.  The current space for writing check-in comments is a single-line text box.  I would like to have a multi-line space as I sometimes write lengthy commentary.  But I recognize that it is a struggle to get most developers to put in more than the word “fixed” as their comment, so this meets the need of the majority as-is, and it’s not a show-stopper for us. Merge.  SQL Source Control currently does not have a Merge feature.  
If two or more people make changes to the same database object, you will get a warning of the conflict and have to choose which one wins (and then manually edit to include the others’ changes).  I think it unlikely you will run into actual conflicts in Stored Procedures and Functions, but you might with Views or Tables.  This will be nice to have, but I’m not losing any sleep over it.  And I have multiple tools at my disposal to do merges manually, so really not a show-stopper for us. Automation has its limits.  As cool as this automation is, it has its limits and there are some changes that you will be better off scripting yourself.  For example, if you are refactoring table definitions, and want to change a column name, you can write that as a quick sp_rename command and preserve the data within that column.  But because this tool is looking just at a before and after picture, it cannot tell that you just renamed a column.  To the tool, it looks like you dropped one column and added another.  This is not a knock against Red-Gate.  All automated scripting tools have this issue, unless the are actively monitoring your every step to know exactly what you are doing.  This means that when you go to Deploy your changes, SQL Compare will script the change as a column drop and add, or will attempt to rebuild the entire table.  Unfortunately, neither of these approaches will preserve the existing data in that column the way an sp_rename will, and so you are better off scripting that change yourself.  Thankfully, SQL Compare will produce warnings about the potential loss of data before it does the actual synchronization and give you a chance to intercept the script and do it yourself. Also, please note that the current official word is that SQL Source Control supports Vault Professional 5.1 and later.  Vault Professional is the new name for what was previously known as Fortress.  (You can read about the name change on SourceGear’s site.)  The last version of Fortress was 2.x, and the API for Fortress 2.x is different from the API for Vault Pro.  At my company, we are currently running Fortress 2.0, with plans to upgrade to Vault Pro early next year.  Gyorgy was able to come up with a work-around for me to be able to use SQL Source Control with Fortress 2.0, even though it is not officially supported.  If you are using Fortress 2.0 and want to use SQL Source Control, be aware that this is not officially supported, but it is working for us, and you can probably get the work-around instructions from Red-Gate if you’re really, really nice to them. Upcoming Topics Some of the other topics I will likely cover in this series over the next few weeks are: How we used to do source control back in the old days (a few weeks ago) before SQL Source Control was available to Vault users What happens when you restore a database that is linked to source control Handling multiple development branches of source code Concurrent Development practices and handling Conflicts Deployment Tips and Best Practices A recap after using the tool for a while

    Read the article

  • Problem: adding feature in MOSS 2007

    - by Anoop
    Hi all, I have added a menu item to the 'Actions' menu of a document library as follows (using Features, as explained in the MSDN article How to: Add Actions to the User Interface: http://msdn.microsoft.com/en-us/library/ms473643.aspx). feature.xml is as follows: <?xml version="1.0" encoding="utf-8" ?> <Feature Id="51DEF381-F853-4991-B8DC-EBFB485BACDA" Title="Import From REP" Description="This example shows how you can customize various areas inside Windows SharePoint Services." Version="1.0.0.0" Scope="Site" xmlns="http://schemas.microsoft.com/sharepoint/"> <ElementManifests> <ElementManifest Location="ImportFromREP.xml" /> </ElementManifests> </Feature> ImportFromREP.xml is as follows: <?xml version="1.0" encoding="utf-8"?> <Elements xmlns="http://schemas.microsoft.com/sharepoint/"> <!-- Document Library Toolbar Actions Menu Dropdown --> <CustomAction Id="ImportFromREP" RegistrationType="List" RegistrationId = "101" GroupId="ActionsMenu" Rights="AddListItems" Location="Microsoft.SharePoint.StandardMenu" Sequence="1000" Title="Import From REP"> <UrlAction Url="/_layouts/ImportFromREP.aspx?ActionsMenu"/> </CustomAction> </Elements> I have successfully installed and activated the feature. Normal case: if I log in with a user having at least 'Contribute' permission on the document library, I am able to see the menu item 'Import From REP' (in the 'Actions' menu) in the root folder as well as in all subfolders. Problem case: if the user has view rights on the document library but add/delete rights on a particular subfolder (say 'subfolder') inside the document library, the menu item 'Import From REP' is not visible at the root folder level, and the 'Upload' menu is also not visible - which is OK, because the user does not have the AddListItems right on the root folder - but it is also not visible in 'subfolder', while the 'Upload' menu is visible there because the user has add/delete rights on 'subfolder'. Did I specify a wrong value for the Rights attribute (Rights="AddListItems") in the XML file? If so, what should the correct value be? What should I do to simulate the behaviour of the 'Upload' menu (i.e. when the 'Upload' menu is visible, the 'Import From REP' menu is visible, and when the 'Upload' menu is not visible, the 'Import From REP' menu is also not visible)? Thanks in advance Anoop

    Read the article

  • Win32 API Question

    - by Lalit_M
    We have developed an ASP.NET web application and implemented a custom authentication solution using Active Directory as the credentials store. Our front-end application uses a normal login form to capture the user name and password and leverages the Win32 LogonUser method to authenticate the user's credentials. When we call the LogonUser method, we use LOGON32_LOGON_NETWORK as the logon type. The issue we have found is that user profile folders are being created under the C:\Users folder of the web server. The folder seems to be created when a new user who has never logged on before logs in for the first time. As the number of new users logging into the application grows, disk space is shrinking due to the large number of new user folders getting created. Has anyone seen this behavior with the Win32 LogonUser method? Does anyone know how to disable this behavior?
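    For context only (not from the original post), the P/Invoke call being described typically looks like the sketch below; the constants are the documented Win32 values and the wrapper name is made up:

        using System;
        using System.Runtime.InteropServices;

        public static class Win32Logon
        {
            private const int LOGON32_LOGON_NETWORK = 3;      // the logon type the post describes
            private const int LOGON32_PROVIDER_DEFAULT = 0;

            [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            private static extern bool LogonUser(
                string userName, string domain, string password,
                int logonType, int logonProvider, out IntPtr token);

            [DllImport("kernel32.dll", SetLastError = true)]
            private static extern bool CloseHandle(IntPtr handle);

            public static bool ValidateCredentials(string userName, string domain, string password)
            {
                IntPtr token = IntPtr.Zero;
                try
                {
                    // A NETWORK logon validates credentials without creating an interactive session.
                    return LogonUser(userName, domain, password,
                                     LOGON32_LOGON_NETWORK, LOGON32_PROVIDER_DEFAULT, out token);
                }
                finally
                {
                    if (token != IntPtr.Zero)
                    {
                        CloseHandle(token);   // always release the logon token
                    }
                }
            }
        }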

    Read the article

  • Deployment of SQL compact Edition (SDF files) using Setup project

    - by Emad
    Hi, I have a C#.NET desktop application using SQL Compact Edition as the data store. The application should be usable by any user on the machine, and all users should see the same data (the data should not differ per user). I am wondering where I should deploy the SDF file. The user's personal data folder (My Documents) means each user will have a separate database. Deploying in the same folder as the application causes Vista to copy the file to \Users\AppData\Local\VirtualStore\, and it seems to make a different copy for each user. Where is it best to deploy the SDF file to ensure all users are looking at the same data?
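    Not part of the original question, but one per-machine location that is often used for data shared by all users is the common application data folder. A small sketch of resolving such a path (the company/product subfolder names are placeholders):

        using System;
        using System.IO;

        public static class SharedDataPath
        {
            // Resolves to something like C:\ProgramData\MyCompany\MyApp\MyData.sdf on Vista and later;
            // unlike the Program Files folder, this location is not subject to per-user virtualization.
            public static string GetDatabasePath()
            {
                string root = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
                string folder = Path.Combine(Path.Combine(root, "MyCompany"), "MyApp");
                Directory.CreateDirectory(folder);   // no-op if the folder already exists
                return Path.Combine(folder, "MyData.sdf");
            }
        }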

    Read the article

  • Netbeans not Deploying/Debugging

    - by Triztian
    Hello all, I have this problem - it's the second time it has happened to me. I'm using the NetBeans IDE 6.9.1, and everything was working fine for a while, but then it stopped deploying my webapp when debugging or running the application. I don't recall messing with the Ant build scripts or any strange configurations. It successfully builds the dist folder and *.war file, and I am able to deploy it manually, but when I run it through the IDE it deletes the webapp folder webapps/myapp, so I cannot debug it or run it; all I get is a ClassNotFoundException for my servlets. I am unsure of what extra information would help to solve this issue. It's pretty annoying - I don't know how it got solved the last time, it just "went away". Any help is truly appreciated.

    Read the article
