Search Results

Search found 21322 results on 853 pages for 'vs 2008'.

Page 822/853

  • Is this a bug? Or is it a setting in ASP.NET 4 (or MVC 2)?

    - by John Gietzen
    I just recently started trying out T4MVC and I like the idea of eliminating magic strings. However, when I use it on my master page for my stylesheets, this:

        <link href="<%: Links.Content.site_css %>" rel="stylesheet" type="text/css" />

    renders like this:

        <link href="&lt;%: Links.Content.site_css %>" rel="stylesheet" type="text/css" />

    Whereas these render correctly:

        <link href="<%: Url.Content("~/Content/Site.css") %>" rel="stylesheet" type="text/css" />
        <link href="<%: Links.Content.site_css + "" %>" rel="stylesheet" type="text/css" />

    It appears that, as long as I have double quotes inside the code segment, it works. But when I put anything else in there, it escapes the leading "less than". Is this something I can turn off? Is this a bug?

    Edit: This does not happen for <script src="..." ... />, nor does it happen for <a href="...">.

    Edit 2: Minimal case:

        <link href="<%: string.Empty %>" />

    vs

        <link href="<%: "" %>" />

    Read the article

  • Why is this unordered list formatting differently in IE7?

    - by Joel
    I'm getting better at making things look good in IE8, FF, and Safari, but IE7 still throws curve balls at me. Please check out this page and scroll down below the nav bar: http://rattletree.com/instruments.php The problem should become obvious when viewing it in FF vs IE7: for some reason the formatting of the list is pushing the list items down on the page. Any tips?

        <ul class="instrument">
          <li class="imagebox"><img src="/images/stuff.jpg" width="247" height="228" alt="Matepe" /></li>
          <li class="textbox"><h3>Matepe</h3><p>This text should be to the right of the image but drops below the image in IE7</p></li>
        </ul>

    css:

        ul.instrument { text-align:left; display:inline-block; }
        ul.instrument li { list-style-type: none; display:inline-block; }
        li.imagebox { display:inline; margin:20px 0; padding:0px; vertical-align:top; }
        li.imagebox img { border: solid black 1px; }
        li.textbox { display:inline; }
        li.textbox p { margin:10px; width:340px; display:inline-block; }

    Read the article

  • Parameter meaning of CBasePin::GetMediaType(int iPosition, ...) method

    - by user325320
    Thanks to everyone who views my question. http://msdn.microsoft.com/en-us/library/windows/desktop/dd368709(v=vs.85).aspx The documentation is not very clear about the iPosition parameter of

        virtual HRESULT GetMediaType(int iPosition, CMediaType *pMediaType);

    It says "Zero-based index value", but an index into what? The index of the samples? I have a source filter sending H.264 NALU flows (MEDIASUBTYPE_AVC1) and it works very well, except that the SPS/PPS may change after the video has played for a while. The SPS and PPS are appended to the MPEG2VIDEOINFO structure, which is passed in the CMediaType::SetFormat method when GetMediaType is called, and there is another version of GetMediaType which accepts the iPosition parameter. It seems I could use this method to update the SPS/PPS. My question is: what does the iPosition parameter mean, and how does the decoder filter know which SPS/PPS are assigned to each NALU sample?

        HRESULT GetMediaType(int iPosition, CMediaType *pMediaType)
        {
            ATLTRACE("\nGetMediaType( iPosition = %d ) ", iPosition);
            CheckPointer(pMediaType, E_POINTER);
            CAutoLock lock(m_pFilter->pStateLock());
            if (iPosition < 0)
            {
                return E_INVALIDARG;
            }
            if (iPosition == 0)
            {
                pMediaType->InitMediaType();
                pMediaType->SetType(&MEDIATYPE_Video);
                pMediaType->SetFormatType(&FORMAT_MPEG2Video);
                pMediaType->SetSubtype(&MEDIASUBTYPE_AVC1);
                pMediaType->SetVariableSize();
            }
            int nCurrentSampleID;
            DWORD dwSize = m_pFlvFile->GetVideoFormatBufferSize(nCurrentSampleID);
            LPBYTE pBuffer = pMediaType->ReallocFormatBuffer(dwSize);
            memcpy(pBuffer, m_pFlvFile->GetVideoFormatBuffer(nCurrentSampleID), dwSize);
            pMediaType->SetFormat(pBuffer, dwSize);
            return S_OK;
        }

    Read the article

  • TFS2010 API - Which server event fires when checkin notes are changed?

    - by user3708981
    I've written a TFS plugin that implements the ISubscriber interface and creates an external ticket based on the contents of a check-in note. What I would like is for the plugin to process the event when I go back through older TFS check-ins in VS and edit a check-in note, creating an external ticket retroactively. What event / SubscribedType do I need to subscribe to in order for ProcessEvent to fire? My stubbed-out code:

        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.Common;
        using Microsoft.TeamFoundation.VersionControl.Client;
        // From C:\Program Files\Microsoft Team Foundation Server 2010\Tools\
        using Microsoft.TeamFoundation.Framework.Server;
        using Microsoft.TeamFoundation.VersionControl.Server;
        using Changeset = Microsoft.TeamFoundation.VersionControl.Server.Changeset;

        public class EmbeddedWorkItemEventHandler : ISubscriber
        {
            const string EVENT_NAME = "TicketEvent";
            const string APP_LOG = "Application";

            public Type[] SubscribedTypes()
            {
                return new Type[1] { typeof(CheckinNotification) }; // What else do I need here?
            }

            public string Name
            {
                get { return EVENT_NAME; }
            }

            public SubscriberPriority Priority
            {
                get { return SubscriberPriority.Normal; }
            }

            public EventNotificationStatus ProcessEvent(
                TeamFoundationRequestContext requestContext,
                NotificationType notificationType,
                object notificationEventArgs,
                out int statusCode,
                out string statusMessage,
                out ExceptionPropertyCollection properties)
            {
                // Create the event source, if it doesn't exist
                if (!System.Diagnostics.EventLog.SourceExists(EVENT_NAME))
                {
                    System.Diagnostics.EventLog.CreateEventSource(EVENT_NAME, APP_LOG);
                }
                statusCode = 0;
                properties = null;
                statusMessage = String.Empty;
                string ErrorLine = "";
                try
                {
                    // Here we'll validate the ticket name
                    if (notificationType == NotificationType.DecisionPoint && notificationEventArgs is CheckinNotification)
                    {
                        // Check-in blocking logic here.
                    }
                    else if (notificationType == NotificationType.Notification && notificationEventArgs is CheckinNotification)
                    {
                        // Tickets on check-in here.
                    }
                }
                catch // was "Catch" in the original, which does not compile in C#
                {
                    // Error checking
                }
                return EventNotificationStatus.ActionPermitted;
            }
        }

    Read the article

  • Are certain open-source licenses more suitable than others for career growth?

    - by Francisco Garcia
    As a software engineer/programmer myself, I love the possibility of downloading code and learning from it. However, building software is what brings food to my table. I have doubts regarding the type of license I should use for my own personal projects, or when picking a project to learn from. There are already many questions about licenses on Stack Overflow, but I would like to make this one much more specific. If your main profession and way of living is building software: which type of license do you find most useful for you? And I mean the license that can benefit you most as a professional, because it gives you more freedom to reuse the experience you gain. GPL is a great license for building communities because it forces you to give back your work. However, I like BSD licenses because of their extra freedom. I know that if the code I am exploring is BSD licensed, I might be able to expand not only my skills, but also my programmer toolbox. Whenever I am working for a company, I might recall that something similar was done in another project and I will be able to copy or imitate certain parts of the code. I know that there are religious wars regarding GPL vs BSD and it is not my intention to start one. Probably many companies already take snippets from GPL projects anyway. I just want to insist on the factor of professional enrichment. I do not intend to discriminate against any license. I said I prefer BSD licenses, but I also use Linux because its user base is bigger, as is the market demand.

    Read the article

  • Does ASP.NET Tracing work in MVC2 Views?

    - by AUSTX_RJL
    I have a VS 2010 MVC2 .NET 4.0 web application. ASP.NET tracing is enabled both in the Page directive (Trace="true") and in the Web.config:

        <trace enabled="true" requestLimit="10" pageOutput="true" traceMode="SortByTime" localOnly="true" writeToDiagnosticsTrace="true" />

    A standard trace listener is also configured in the Web.config:

        <trace autoflush="true" indentsize="4">
          <listeners>
            <add name="WebPageTrace" type="System.Web.WebPageTraceListener, System.Web, Version=4.0.30319.1, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
            <add name="TextWriterTrace" type="System.Diagnostics.TextWriterTraceListener" initializeData="textListener.log" />
          </listeners>
        </trace>

    Tracing works fine from the controller, but when I add a trace call in the view (.aspx), nothing ever shows:

        <% System.Diagnostics.Trace.WriteLine("Message System.Diagnostics.Trace from View"); %>
        <% Page.Trace.Write("Message Page.Trace from View"); %>

    Is this supposed to work? Is there something else that is needed to enable tracing from a view? Thanks
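    Since tracing is confirmed to work from the controller, one hedged workaround sketch (not from the question itself) is to keep diagnostic writes in controller code, where the configured TextWriterTraceListener picks them up; the controller name and message below are hypothetical:

        using System.Diagnostics;
        using System.Web.Mvc;

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                // Written to textListener.log via the TextWriterTraceListener
                // configured in Web.config (autoflush="true" flushes immediately).
                Trace.WriteLine("Message System.Diagnostics.Trace from Controller");
                return View();
            }
        }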

    Read the article

  • Does new JUnit 4.8 @Category render test suites almost obsolete?

    - by grigory
    Given the question 'How to run all tests belonging to a certain Category?' and its answer, would the following approach be better for test organization?

    - define a master test suite that contains all tests (e.g. using ClasspathSuite)
    - design a sufficient set of JUnit categories (sufficient meaning that every desirable collection of tests is identifiable using one or more categories)
    - define targeted test suites based on the master test suite and the set of categories

    For example:

    - identify categories for speed (slow, fast), dependencies (mock, database, integration), function (), domain (
    - demand that each test is properly qualified (tagged) with the relevant set of categories
    - create the master test suite using ClasspathSuite (all tests found in the classpath)
    - create targeted suites by qualifying the master test suite with categories, e.g. a mock test suite, a fast database test suite, a slow integration test suite for domain X, etc.

    My question is essentially soliciting the approval rating for this approach vs. the classic test suite approach. One unbeatable benefit is that every new test is immediately contained in the relevant suites with no suite maintenance. One concern is the proper categorization of each test.

    Read the article

  • WSDL first for existing service layer

    - by Jurgen H
    I am working on an existing Java project with a typical services - DAO setup, for which only a web application was available. My job is to add web services on top of the services layer, but the web services have their own functional analysis and data model. The functional analysis of course focuses on what is possible in the different service methods. As good practice demands, we used the WSDL-first strategy and generated JAXB-bound Java classes and an SEI for the web services. After having implemented the web services partially, we noticed a 70% match between the data models. This resulted in writing converters which take the web service JAXB classes and map them to the service layer classes:

        Customer customer = new Customer();
        customer.setName(wsCustomer.getName());
        customer.setFirstName(wsCustomer.getFirstName());
        ..

    This is a very obvious example; some other mappings were a little more complicated. Can anyone give his best practices, experiences, or solutions for this kind of situation? Are any of these frameworks useful?

        http://transmorph.sourceforge.net/wiki/index.php/Main_Page
        http://ezmorph.sourceforge.net/

    Please don't start a discussion about WSDL first vs. code first.

    Read the article

  • Crash using WscRegisterForChanges

    - by user335126
    I'm trying to use the WscRegisterForChanges function with C++ in Windows 7. Documentation is located here: http://msdn.microsoft.com/en-us/library/bb432507(v=VS.85).aspx My problem is that even though the callback properly executes, the code crashes when it gets to the end of the callback's execution. Here's the code in question. It's very simple, so I'm not sure why it's crashing (the angle-bracketed headers were stripped by the page; they are restored here as the three the code needs):

        #include <windows.h>
        #include <wscapi.h>
        #include <cstdio>

        void SecurityCenterChangeOccurred(void *param)
        {
            printf("Change occurred!\n");
        }

        int main()
        {
            HRESULT result = S_OK;
            HANDLE callbackRegistration = NULL;
            // Note: the cast below hides a signature mismatch -
            // LPTHREAD_START_ROUTINE is DWORD WINAPI (*)(LPVOID),
            // not void (*)(void *).
            result = WscRegisterForChanges(
                NULL,
                &callbackRegistration,
                (LPTHREAD_START_ROUTINE)SecurityCenterChangeOccurred,
                NULL);
            while (1)
            {
                Sleep(100);
            }
            return 0;
        }

    My call stack looks like this when the crash occurs:

        00faf6e8()
        ntdll.dll!_TppWorkerThread@4() + 0x1293 bytes
        kernel32.dll!@BaseThreadInitThunk@12() + 0x12 bytes
        ntdll.dll!___RtlUserThreadStart@8() + 0x27 bytes
        ntdll.dll!__RtlUserThreadStart@8() + 0x1b bytes

    If I add ExitThread(0); to the end of SecurityCenterChangeOccurred, I get an error and the following trace (so I don't think I should be using ExitThread):

        Unhandled exception at 0x7799852b (ntdll.dll) in WscRegisterForChangesCrash.exe: 0xC000071C: An invalid thread, handle %p, is specified for this operation. Possibly, a threadpool worker thread was specified.

        ntdll.dll!_TpCheckTerminateWorker@4() + 0x3ca2f bytes
        ntdll.dll!_RtlExitUserThread@4() + 0x30 bytes
        WscRegisterForChangesCrash.exe!SecurityCenterChangeOccurred(void * param=0x00000000) Line 8 + 0xa bytes C++
        wscapi.dll!WorkItemWrapper() + 0x19 bytes
        ntdll.dll!_RtlpTpWorkCallback@8() + 0xdf bytes
        ntdll.dll!_TppWorkerThread@4() + 0x1293 bytes
        kernel32.dll!@BaseThreadInitThunk@12() + 0x12 bytes
        ntdll.dll!___RtlUserThreadStart@8() + 0x27 bytes
        ntdll.dll!__RtlUserThreadStart@8() + 0x1b bytes

    Does anyone have any ideas why this might be happening? To trigger the crash, run the program and turn the firewall on or off.

    Read the article

  • Best practices for (over)using Azure queues

    - by John
    Hi, I'm in the early phases of designing an Azure-based application. One of the things that attracts me to Azure is the scalability, given the variability of the demand I'm likely to expect. As such, I'm trying to keep things loosely coupled so I can add instances when I need to. The recommendations I've seen for architecting an application for Azure include keeping web role logic to a minimum, having processing done in worker roles, and using queues to communicate, with some sort of back-end store like SQL Azure or Azure Tables. This seems like a good idea to me, as I can scale up either or both parts of the application without any issue. However, I'm curious whether there are any best practices (or whether anyone has any experiences) for when it's best to just have the web role talk directly to the data store vs. sending data via the queue. I'm thinking of the case where I have a simple insert to do from the web role - while I could set this up as a message, send it on the queue, and have a worker role pick it up and do the insert, it seems like a lot of double-handling. However, I also appreciate that this may be better in the long run, in case the web role gets overwhelmed or more complex logic ends up being required for the insert. I realise this might be a case where the answer is "it depends entirely on the situation, check your perf metrics" - but if anyone has any thoughts I'd be very appreciative! Thanks John
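    To make the trade-off concrete, here is a minimal sketch of the queue variant using the 2010-era Microsoft.WindowsAzure.StorageClient API; the queue name, connection string, and payload are placeholder assumptions, not from the question:

        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        // Web role side: enqueue the insert instead of writing to the store directly.
        var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;...");
        var queue = account.CreateCloudQueueClient().GetQueueReference("inserts");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage("insert-payload"));

        // Worker role side: poll, process, delete.
        var msg = queue.GetMessage();
        if (msg != null)
        {
            // ... perform the insert against SQL Azure / Azure Tables ...
            queue.DeleteMessage(msg);
        }

    The design consequence is the one weighed above: the direct insert is simpler, while the queue decouples the web role from insert latency and lets the worker side scale independently.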

    Read the article

  • Creating a Contact form in Visual Studio ASPX and saving to an XML file when clicking SUBMIT

    - by user327137
    Hey people, hope all is well. I am trying to create a form in VS using ASP so that, upon submitting the form, the details get automatically stored in an XML file (at a chosen file save path) which can be accessed later. I have two files, "Contact.aspx" and "Contact.aspx.vb". I have created the form in "Contact.aspx", but when trying to handle the fields in "Contact.aspx.vb" I keep getting several errors, for example:

        Error 5  'Formatting' is not a member of 'System.Web.UI.WebControls.XmlBuilder'
        Error 6  'WriteStartDocument' is not a member of 'System.Web.UI.WebControls.XmlBuilder'.
        Error 7  'WriteComment' is not a member of 'System.Web.UI.WebControls.XmlBuilder'.
        Error 8  'WriteStartElement' is not a member of 'System.Web.UI.WebControls.XmlBuilder'.
        Error 10 'WriteAttributeString' is not a member of 'System.Web.UI.WebControls.XmlBuilder'.

    There are about 30 errors in total. I'm literally in over my head, have been trying for 2 days now, and can't grasp what I'm doing wrong. I've even tried some of the tutorials online, but still get loads of errors. Hope someone can fix this, thank you.
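    For what it's worth, the members named in those errors (Formatting, WriteStartDocument, WriteComment, WriteStartElement, WriteAttributeString) all exist on System.Xml.XmlTextWriter, which suggests the writer variable was accidentally declared as the unrelated System.Web.UI.WebControls.XmlBuilder type. A minimal C# sketch of the intended usage (the file path and attribute value are hypothetical; the VB equivalent is direct):

        using System.Text;
        using System.Xml;

        // XmlTextWriter, not XmlBuilder, provides the members the errors mention.
        using (var writer = new XmlTextWriter(@"C:\data\contacts.xml", Encoding.UTF8))
        {
            writer.Formatting = Formatting.Indented;
            writer.WriteStartDocument();
            writer.WriteComment("Saved from the contact form");
            writer.WriteStartElement("contact");
            writer.WriteAttributeString("name", "submitted-name-here"); // hypothetical value
            writer.WriteEndElement();
            writer.WriteEndDocument();
        }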

    Read the article

  • Entity framework with Linq to Entities performance

    - by mare
    If I have a static method like this:

        public static string GetTicClassificationTitle(string classI, string classII, string classIII)
        {
            using (TicDatabaseEntities ticdb = new TicDatabaseEntities())
            {
                var result = from classes in ticdb.Classifications
                             where classes.ClassI == classI
                             where classes.ClassII == classII
                             where classes.ClassIII == classIII
                             select classes.Description;
                return result.FirstOrDefault();
            }
        }

    and use this method in various places, in foreach loops, or just by calling it numerous times, does it create and open a new connection every time? If so, how can I tackle this? Should I cache the results somewhere - in this case, cache the entire Classifications table in a memory cache - and then run queries against that cached object? Or should I make the TicDatabaseEntities variable static and initialize it at class level? Should my class be static if it contains only static methods? Because right now it is not. Also, I've noticed that if I return result.First() instead of FirstOrDefault() and the query does not find a match, it throws an exception (with FirstOrDefault() there is no exception; it returns null). Thank you for the clarification.
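    A minimal sketch of the table-caching idea raised in the question, reusing the entity and property names shown above; the double-checked lazy initialization is an addition, not something from the question (assumes the usual System.Linq and System.Collections.Generic usings):

        private static readonly object CacheLock = new object();
        private static List<Classification> _classifications;

        public static string GetTicClassificationTitle(string classI, string classII, string classIII)
        {
            if (_classifications == null)
            {
                lock (CacheLock)
                {
                    if (_classifications == null)
                    {
                        using (var ticdb = new TicDatabaseEntities())
                        {
                            // One round-trip; subsequent calls hit the in-memory list.
                            _classifications = ticdb.Classifications.ToList();
                        }
                    }
                }
            }
            return _classifications
                .Where(c => c.ClassI == classI && c.ClassII == classII && c.ClassIII == classIII)
                .Select(c => c.Description)
                .FirstOrDefault();
        }

    Caching the whole table avoids keeping a static context alive, which sidesteps the connection-lifetime question entirely for read-only lookup data like this.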

    Read the article

  • Boost program not working on Linux

    - by Martin Lauridsen
    Hi SOF, I have this program which uses Boost::Asio for sockets. I pretty much altered some code from the Boost examples. The program compiles and runs just like it should on Windows in VS. However, when I compile the program on Linux and run it, I get a segmentation fault. I posted the code here. The command I use to compile it is this:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include mpqs.cpp mpqs_polynomial.cpp mpqs_host.cpp -o mpqs_host -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib -lboost_system -lboost_thread -static -lpthread

    By commenting out code, I have found that I get the segmentation fault due to the following line:

        boost::asio::io_service io_service;

    Can anyone provide any assistance as to what the problem (and the solution) may be? Thanks! Edit: I tried changing the program to a minimal example, using no other libraries or headers, just boost/asio.hpp:

        #define DEBUG 0
        #include <boost/asio.hpp>

        int main(int argc, char* argv[])
        {
            boost::asio::io_service io_service;
            return 0;
        }

    I also removed the other library inclusions and linking from the compilation, but this minimal example still gives me a segmentation fault.

    Read the article

  • UML diagrams that are actually pretty?

    - by Borek
    I'm looking for diagramming software that produces good-looking output. It doesn't need to support everything (or even much) of UML, and it doesn't need code engineering functions or anything like that; it just needs to produce visually interesting output. Here are a couple of examples of products whose output I consider ugly or not good enough: Visio with the default UML stencils (I didn't find better-looking ones), Enterprise Architect, Dia, ArgoUML, and many other "professional" UML tools. A couple of visually compelling tools that I considered (but found issues with):

    - Visual Studio class diagrams - just for .NET classes, but the output is miles better than what UML tools typically produce
    - NClass - similar to VS's class diagrams, but I could not find the "pretty" blue skin anywhere
    - yuml.me - very nice but lacking some advanced layout options. I have to say that I find their style almost ideal for high-level diagrams - they look sketchy, which is good.
    - Balsamiq - I think Joel used this for hginit.com and I liked it. However, it's not suited to creating software diagrams, so I can imagine it would be quite a lot of work.
    - MS Word actually has quite a good graphics engine, but I'd rather leave it as a choice of last resort.

    I'd be grateful for any good tips.

    Read the article

  • Quick questions re moving to InfoPath forms

    - by sweissman
    Hi there: I’ve been asked to look into how best to move forms into InfoPath and have a couple of basic questions about your experiences so I can get an insider’s lay of the land. Even some short, quick bullets would be really helpful -- thank you! Are you starting from scratch in InfoPath, or are you converting from paper or a different e-format? (Jetform, PDF, etc.) Are you trying to re-create the layout of a specific paper form, or is a regular online form OK? (trying to learn what the latest thinking is about this) Do you need only simple fill and submit capabilities, or do you need programming for calculations, validation, database lookup/entry/reporting, etc. as well? (don’t know how much harder it is to do all this vs. not) How long does each form take to finish? (I know it depends, but is there a rough guideline for planning purposes?) Who’s doing the actual work? (by title or function) What is especially straightforward or challenging about moving to InfoPath forms? (forewarned is forearmed!)

    Read the article

  • Short file names versus long file names in Windows

    - by normski
    I have some code which gets the short name for a file path using GetShortPathNameW(), and then later retrieves the long name via GetLongPathNameA(). The original file is of the form "C:/ProgramData/My Folder/File.ext". However, following the conversion to short and back to long, the filename becomes "C:/Program Files/My Folder/Filename.ext". The short name is of the form "C:/PROGRA~2/MY_FOL~1/FIL~1.EXT". The short name is being incorrectly resolved. The code compiles using VS 2005 on Windows 7 (I cannot upgrade the project to VS 2008). Does anybody have any idea why this might be happening?

        DWORD pathLengthNeeded = ::GetShortPathNameW(aRef->GetFilePath().c_str(), NULL, 0);
        if (pathLengthNeeded != 0)
        {
            WCHAR* shortPath = new WCHAR[pathLengthNeeded];
            DWORD newPathNameLength = ::GetShortPathNameW(aRef->GetFilePath().c_str(), shortPath, pathLengthNeeded);
            if (newPathNameLength != 0)
            {
                UI_STRING unicodePath(shortPath);
                std::string asciiPath = StringFromUserString(unicodePath);
                pathLengthNeeded = ::GetLongPathNameA(asciiPath.c_str(), NULL, 0);
                if (pathLengthNeeded != 0)
                {
                    // convert it back to a long path if possible. For goodness sake, can't we use Unicode throughout?
                    char* longPath = new char[pathLengthNeeded];
                    DWORD newPathNameLength = ::GetLongPathNameA(asciiPath.c_str(), longPath, pathLengthNeeded);
                    if (newPathNameLength != 0)
                    {
                        std::string longPathString(longPath, newPathNameLength);
                        asciiPath = longPathString;
                    }
                    delete [] longPath;
                }
                SetFullPathName(asciiPath);
            }
            delete [] shortPath;
        }

    Read the article

  • lapply slower than for-loop when used for a BiomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using the biomaRt package. I have loci and want to retrieve some related information, let's say the description. I first tried to use lapply but was surprised by the time needed for the task to be performed. I thus tried a more basic for-loop and got a faster result. Is that expected, or is something wrong with my code or with my understanding of apply? I have read other posts dealing with *apply vs. for-loop performance (here, for example) and I was aware that improved performance should not be expected, but I don't understand why performance here is actually lower. Here is a reproducible example.

    1) Loading the library and selecting the database:

        library("biomaRt")
        athaliana <- useMart("plants_mart_14")
        athaliana <- useDataset("athaliana_eg_gene", mart = athaliana)

    2) Querying the database:

        loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335",
                  "at1g02790", "at1g03220", "at1g03230", "at1g04040",
                  "at1g04110", "at1g05240")

    I create a function for use in lapply:

        foo <- function(loci) {
          getBM("description", "tair_locus", loci, athaliana)
        }

    When I use this function on the first element:

        > system.time(foo(cwp_loci[1]))
        utilisateur     système      écoulé
              0.020       0.004       1.599

    When I use lapply to retrieve the data for all values:

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.220       0.000      16.376

    I then created a new function, using a for-loop:

        foo2 <- function(loci) {
          # note: `for (i in loci)` iterates over the elements themselves,
          # so `loci[i]` indexes the (unnamed) vector by character value
          for (i in loci) {
            getBM("description", "tair_locus", loci[i], athaliana)
          }
        }

    Here is the result:

        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.204       0.004      10.919

    Of course, this will be applied to a big list of loci, so the best-performing option is needed. I thank you for your assistance.

    EDIT: Following the recommendation of @MartinMorgan, simply passing the vector loci to getBM greatly improves the query efficiency. Simpler is better.

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.236       0.024     110.512
        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.208       0.040     116.099
        > system.time(foo(loci))
        utilisateur     système      écoulé
              0.028       0.000       6.193

    Read the article

  • File/Property rename problem in Visual Studio and Explorer

    - by user211377
    I am running Windows 7. In Visual Studio, if I try to rename a file by right-click/rename, it behaves as normal for a couple of seconds, then switches out of edit mode. A similar problem occurs when I try to change a property, for example the name of a control. When I click in the property value, I can start editing, but then it assumes the edit is complete, and if I continue typing it overwrites the text. It does this every couple of seconds, so, for example, if I want to name a control mnuFile, I might get mn, then uFi, then le. S, the control ends upgetting called whatever I typed in the last 2-3 characters. I have the same problem with file rename in Explorer. Looks to me as though some timeout is kicking in and terminating the edit. Well, I was going to try a 'Repair install', but that's not an option in Windows 7! So, I went through the re-install, up to the point where I thought is was going to trash my install, and then cancelled it! By some miracle, that has fixed the problem!#Thanks for the advice about ShellExView, I'll try that next time it happens. Thanks for the answers guys! In my view it is more a Visual Studio issue, since it affects both file renames and properties in VS. In Explorer it only affects file rename, which is (just slightly) less annoying!

    Read the article

  • What are the benefits and risks of moving to a Model Driven Architecture approach?

    - by Tone
    I work for a company with about 350 employees and we are in the process of growing. Our current codebase is not structured very well, and we are looking both at how to improve it immediately (by organizing objects into namespaces, separating concerns, etc.) and at moving to a model-driven architecture approach, where we model and design everything first with UML, then generate code from that model. We have been looking heavily at Sparx Systems Enterprise Architect (EA) (which is UML 2.0 capable) and we are also considering the tools in VS 2010. I know there are other tools out there (Rational XDE being one) but I really do not think we can spend $1500+ per license at this point. I'm not looking for answers on which tool is better than another, but rather for experiences moving from a cowboy-coding environment (that is, little planning and design; just jump in and start coding) to a model-driven architecture. Looking back, was it helpful to your organization? What are the pain points? What are the risks? What are the benefits?

    Read the article

  • C++ struct, public data members and inheritance

    - by Marius
    Is it OK to have public data members in a C++ class/struct in certain particular situations? How would that go along with inheritance? I've read opinions on the matter, some stated already here (http://stackoverflow.com/questions/952907/practices-on-when-to-implement-accessors-on-private-member-variables-rather-than and http://stackoverflow.com/questions/670958/accessors-vs-public-members) or in books/articles (Stroustrup, Meyers), but I'm still a little bit in the dark. I have some configuration blocks that I read from a file (integers, bools, floats) and I need to place them into a structure for later use. I don't want to expose these externally, just use them inside another class (I actually do want to pass these config parameters to another class, but I don't want to expose them through a public API). The fact is that I have many such config parameters (15 or so) and writing getters and setters seems an unnecessary overhead. Also, I have more than one configuration block, and these share some of the parameters. Making a struct with all the data members public and then subclassing does not feel right. What's the best way to tackle this situation? Does making one big struct to cover all parameters provide an acceptable compromise (I would have to leave some of them set to their default values for blocks that do not use them)?

    Read the article

  • Refactoring routes - serving different layouts

    - by dmclark
    As a Rails noob, I started with a routes.rb of:

        ActionController::Routing::Routes.draw do |map|
          map.resources :events
          map.connect 'affiliates/list', :controller => "affiliates", :action => "list"
          map.connect 'affiliates/regenerate_thumb/:id', :controller => "affiliates", :action => "regenerate_thumb"
          map.connect 'affiliates/state/:id.:format', :controller => "affiliates", :action => "find_by_state"
          map.connect 'affiliates/getfeed', :controller => "affiliates", :action => "feed"
          map.resources :affiliates, :has_many => :events
          map.connect ":controller/:action"
          map.connect '', :controller => "affiliates"
          map.connect ":controller/:action/:id"
          map.connect ":controller/:action/:id/:format"
        end

    and I'm trying to tighten it up. I've gotten as far as:

        ActionController::Routing::Routes.draw do |map|
          map.resources :events, :only => "index"
          map.resources :affiliates do |affiliates|
            affiliates.resources :has_many => :events
            affiliates.resources :collection => { :list => :get, :regenerate_thumb => "regenerate_thumb" }
          end
          # map.connect 'affiliates/regenerate_thumb/:id', :controller => "affiliates", :action => "regenerate_thumb"
          map.connect 'affiliates/state/:id.:format', :controller => "affiliates", :action => "find_by_state"
          map.connect 'affiliates/getfeed', :controller => "affiliates", :action => "feed"
          map.root :affiliates
        end

    What is confusing to me is routes vs. parameters. For example, I realized that the only difference between list and index is HOW it is rendered, rather than WHAT is rendered. Having a different action (as I do now) feels wrong, but I can't figure out the right way. Thanks

    Read the article

  • Issue displaying a local image from XAML

    - by Flack
    Hello, I have the below simple XAML:

        <Window x:Class="WpfApplication1.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Window1" Height="300" Width="300">
            <Grid>
                <Image Source="happyface.jpg"/>
            </Grid>
        </Window>

    happyface.jpg is included in the project, its Build Action is set to "Content", and Copy To Output Directory is set to "Copy Always". When looking at the app through the VS designer, everything is OK and I see the image. However, when I run the app, no image is displayed. I see that the image is copied to the output directory. If I put in the entire path as the source (C:\SANDBOX\WpfApplication1\WpfApplication1\bin\Debug") it works. Any ideas as to why the image is not displayed when I run the app? I read about pack URIs but thought that to simply reference a loose image in the current directory, I could just use the image name. Thank you.
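    One workaround worth sketching (an addition, not something stated in the question) is a site-of-origin pack URI, which resolves relative to the .exe's directory at run time and so matches Build Action = Content with Copy Always. In code-behind, assuming the Image element is given x:Name="happyImage":

        using System;
        using System.Windows.Media.Imaging;

        // "siteoforigin" resolves against the application's deployment
        // directory, where the Copy Always content file ends up.
        happyImage.Source = new BitmapImage(
            new Uri("pack://siteoforigin:,,,/happyface.jpg"));

    The same URI should also work directly in the XAML Source attribute.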

    Read the article

  • Deterministic floating point and .NET

    - by code2code
    How can I guarantee that floating point calculations in a .NET application (say, in C#) always produce the same bit-exact result, especially when using different versions of .NET and running on different platforms (x86 vs x86_64)? Inaccuracies of floating point operations do not matter. In Java I'd use strictfp. In C/C++ and other low-level languages this problem is essentially solved by accessing the FPU / SSE control registers, but that's probably not possible in .NET. Even with control of the FPU control register, the JIT of .NET will generate different code on different platforms. Something like HotSpot would be even worse in this case... Why do I need it? I'm thinking about writing a real-time strategy (RTS) game which depends heavily on fast floating point math together with a lock-stepped simulation. Essentially I will only transmit user input across the network. This also applies to other games which implement replays by storing the user input. Not options:

    - decimals (too slow)
    - fixed point values (too slow and cumbersome when using sqrt, sin, cos, tan, atan...)
    - updating state across the network like an FPS: sending position information for hundreds or a few thousand units is not an option

    Any ideas?
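    One partial mitigation worth noting (an addition, not from the question): the CLI allows floating point intermediates to be held at higher than declared precision, but an explicit cast to float or double forces the value to be narrowed to its declared size. A minimal sketch; this tames x87-vs-SSE intermediate-precision differences for arithmetic, but does not by itself make library functions like Math.Sin bit-exact across platforms:

        // Each explicit cast forces rounding to IEEE 754 single precision,
        // so an 80-bit x87 intermediate (x86) and an SSE result (x86_64)
        // are narrowed to the same representation after every operation.
        static float MulAdd(float a, float b, float c)
        {
            return (float)((float)(a * b) + c);
        }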

    Read the article

  • RegQueryValueEx not working with a Release version but working fine with Debug

    - by Nux
    Hi. I'm trying to read some ODBC details from the registry, and for that I use RegQueryValueEx. The problem is that when I compile the release version, it simply cannot read any registry values. The code is:

        CString odbcFuns::getOpenedKeyRegValue(HKEY hKey, CString valName)
        {
            CString retStr;
            char *strTmp = (char*)malloc(MAX_DSN_STR_LENGTH * sizeof(char));
            memset(strTmp, 0, MAX_DSN_STR_LENGTH);
            DWORD cbData; // note: never initialized - RegQueryValueEx expects this
                          // to hold the buffer size (in bytes) on input
            long rret = RegQueryValueEx(hKey, valName, NULL, NULL, (LPBYTE)strTmp, &cbData);
            if (rret != ERROR_SUCCESS)
            {
                free(strTmp);
                return CString("?");
            }
            strTmp[cbData] = '\0';
            retStr.Format(_T("%s"), strTmp);
            free(strTmp);
            return retStr;
        }

    I've found a workaround for this - I disabled optimization (/Od) - but it seems strange that I needed to do that. Is there some other way? I use Visual Studio 2005. Maybe it's a bug in VS? I almost forgot - the error code is 2 (as if the key could not be found).

    Read the article

  • Is encrypting a session id (or other authentication value) in a cookie useful at all?

    - by Ji
    In web development, when session state is enabled, a session id is stored in a cookie (in cookieless mode, the query string is used instead). In ASP.NET, the session id is encrypted automatically. There are plenty of topics on the internet regarding how you should encrypt your cookies, including the session id. I can understand why you would want to encrypt private info such as a DOB, but private info should not be stored in a cookie in the first place anyway. So for other cookie values such as the session id, what is the purpose of encryption? Does it add security at all? No matter how you secure it, it will be sent back to the server for decryption. To be more specific, for authentication purposes:

    - turn off session state; I don't want to deal with session timeouts any more
    - store some sort of id value in the cookie; on the server side, check whether the id value exists and matches, and if it does, authenticate the user
    - let the cookie value expire when the browser session ends

    vs. the ASP.NET forms authentication mechanism (it relies on the session or session id, I think). Does the latter offer better security?
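    On the "does it add security" point: what protecting a non-secret cookie value usually buys is tamper-evidence rather than secrecy, and that is achieved by signing rather than encrypting. A minimal sketch of HMAC-signing an id (the key handling is a placeholder assumption, not from the question):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static string SignCookieValue(string id, byte[] serverSecretKey)
        {
            using (var hmac = new HMACSHA256(serverSecretKey))
            {
                byte[] tag = hmac.ComputeHash(Encoding.UTF8.GetBytes(id));
                // The client can read the id but cannot forge a different id
                // without the server-side key; verify the tag on every request.
                return id + "." + Convert.ToBase64String(tag);
            }
        }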

    Read the article
