Search Results

Search found 3987 results on 160 pages for 'captain obvious'.

  • Can't get KnownType to work with WCF

    - by Kelly Cline
    I have an interface and a class defined in separate assemblies, like this: namespace DataInterfaces { public interface IPerson { string Name { get; set; } } } namespace DataObjects { [DataContract] [KnownType( typeof( IPerson ) ) ] public class Person : IPerson { [DataMember] public string Name { get; set; } } } This is my Service Interface: public interface ICalculator { [OperationContract] IPerson GetPerson ( ); } When I update my Service Reference for my Client, I get this in the Reference.cs: public object GetPerson() { return base.Channel.GetPerson(); } I was hoping that KnownType would give me IPerson instead of "object" here. I have also tried [KnownType( typeof( Person ) ) ] with the same result. I have control of both client and server, so I have my DataObjects (where Person is defined) and DataInterfaces (where IPerson is defined) assemblies in both places. Is there something obvious I am missing? I thought KnownType was the answer to being able to use interfaces with WCF. ----- FURTHER INFORMATION ----- I removed the KnownType from the Person class and added [ServiceKnownType( typeof( Person ) ) ] to my service interface, as suggested by Richard. The client-side proxy still looks the same, public object GetPerson() { return base.Channel.GetPerson(); }, but now it doesn't blow up. The client just has an "object", though, so it has to cast it to IPerson before it is useful. var person = client.GetPerson ( ); Console.WriteLine ( ( ( IPerson ) person ).Name );
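
    One commonly suggested workaround, since both ends already share the DataInterfaces and DataObjects assemblies, is to skip the generated proxy entirely and call through ChannelFactory<T> with the real contract; a minimal sketch (binding and address are placeholders):

        // Shares ICalculator/IPerson from the common assemblies, so the
        // return type stays IPerson instead of the proxy's "object".
        var factory = new ChannelFactory<ICalculator>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8000/calculator"));
        ICalculator client = factory.CreateChannel();
        IPerson person = client.GetPerson();   // no cast needed
        Console.WriteLine(person.Name);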

  • Difference in BackgroundWorker thread access in VS2010 / .NET 4.0?

    - by Jonners
    Here's an interesting one - in VS2005 / VS2008 running against .NET 2.0 / .NET 3.0 / .NET 3.5, a BackgroundWorker thread may not directly update controls on a WinForms form that initiated that thread - you'll get a System.InvalidOperationException out of the BackgroundWorker stating "Cross-thread operation not valid: Control 'thecontrol' accessed from a thread other than the thread it was created on". I remember hitting this back in 2004 or so when I first started writing .NET WinForms apps with background threads. There are several ways around the problem - this is not a question asking for that answer. I've been told recently that this kind of operation is now allowed when written in VS2010 / .NET 4.0. This seems unlikely to me, since this kind of object access restriction has always been a fairly fundamental part of thread-safe programming. Allowing a BackgroundWorker thread direct access to objects owned not by itself but by its parent UI form seems contrary to that principle. A trawl through the .NET 4.0 docs hasn't revealed any obvious changes that could account for this behaviour. I don't have VS2010 / .NET 4.0 to test this out - does anyone who has access to that toolset know for sure whether the model has changed to allow that kind of thread interaction? I'd like to either take advantage of it in future, or deploy the cluestick. ;)
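
    For reference, the version-independent pattern is to let BackgroundWorker marshal results back to the UI thread rather than touching controls from DoWork; a minimal sketch (label1 and ComputeReport are hypothetical):

        var worker = new BackgroundWorker();
        worker.DoWork += (s, e) => e.Result = ComputeReport();   // runs on a pool thread
        worker.RunWorkerCompleted += (s, e) =>
            label1.Text = (string)e.Result;                      // runs on the UI thread
        worker.RunWorkerAsync();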

  • Optimizing a bin-placement algorithm

    - by user258651
    Alright, I've got two collections, and I need to place elements from collection1 into the bins (elements) of collection2, based on whether their value falls within a given bin's range. For a concrete example, assume I have a sorted collection of objects (bins) which have an int range ([1...4], [5..10], etc). I need to determine the range an int falls in, and place it in the appropriate bin. foreach(element n in collection1) { foreach(bin m in collection2) { if (m.inRange(n)) { m.add(n); break; } } } So the obvious NxM complexity algorithm is there, but I really would like to see Nxlog(M). To do this I'd like to use BinarySearch in place of the inner foreach loop. To use BinarySearch, I need to implement an IComparer class to do the searching for me. The problem I'm running into is that this approach would require me to make an IComparer.Compare function that compares two different types of objects (an element to its bin), and that doesn't seem possible or correct. So I'm asking, how should I write this algorithm?
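
    One way around the two-types problem is to binary-search a projection of the bins rather than the bins themselves; a sketch, assuming collection2 is sorted by range and exposes Min, InRange and Add (assumed names):

        // Lower bound of each bin, in ascending order.
        int[] lowerBounds = collection2.Select(b => b.Min).ToArray();
        foreach (int n in collection1)
        {
            int i = Array.BinarySearch(lowerBounds, n);
            if (i < 0) i = ~i - 1;              // last bin whose lower bound <= n
            if (i >= 0 && collection2[i].InRange(n))
                collection2[i].Add(n);
        }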

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot (up to 60 TB) of big files (usually 30 to 40 GB each) to tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, once for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I'd need some way to read a file, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of: tar cf - files | tee tarfile.tar | md5sum - Except that I don't want the checksum of the whole archive (which is what this sample shell code gives) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
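
    For what it's worth, the read-each-byte-once idea is expressible in a few lines of Python's tarfile, and the hot loops (hashing, copying) run in C, so it may be worth benchmarking before hand-building anything; a sketch (Python 3) that streams a tar to stdout and per-file md5 sums to stderr:

        import hashlib, sys, tarfile

        class Tee(object):
            """File wrapper that updates a digest as tar reads from it."""
            def __init__(self, f, digest):
                self.f, self.digest = f, digest
            def read(self, n=-1):
                chunk = self.f.read(n)
                self.digest.update(chunk)
                return chunk

        tar = tarfile.open(fileobj=sys.stdout.buffer, mode='w|')
        for name in sys.argv[1:]:
            info = tar.gettarinfo(name)
            digest = hashlib.md5()
            with open(name, 'rb') as src:
                tar.addfile(info, Tee(src, digest))   # one read feeds both sinks
            print('%s  %s' % (digest.hexdigest(), name), file=sys.stderr)
        tar.close()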

  • Efficiently sending protocol buffer messages with http on an android platform

    - by Ben Griffiths
    I'm trying to send messages generated by Google Protocol Buffer code via a simple HTTP scheme to a server. What I have currently implemented is here (forgive the obvious incompletion...): HttpClient client = new DefaultHttpClient(); String url = "http://192.168.1.69:8888/sdroidmarshal"; HttpPost postRequest = new HttpPost(url); String proto = offers.build().toString(); List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(1); nameValuePairs.add(new BasicNameValuePair("sdroidmsg", proto)); postRequest.setEntity(new UrlEncodedFormEntity(nameValuePairs)); try { ResponseHandler<String> responseHandler = new BasicResponseHandler(); String responseBody = client.execute(postRequest, responseHandler); } catch (Throwable t) { } I'm not that experienced with communications over the internet, and no more so with HTTP - though I do understand the basics... So my question, before I blindly develop the rest of the application around this, is whether or not this is particularly efficient. I ideally would like to keep messages small, and I assume toString() adds some unnecessary formatting.
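
    One thing worth knowing here: on a protocol buffer message, toString() emits the human-readable text format, not the compact wire encoding, so posting the raw bytes is both smaller and avoids the form/URL-encoding overhead. A sketch along those lines (URL as in the question; the content type is an assumption about the server):

        HttpClient client = new DefaultHttpClient();
        HttpPost postRequest = new HttpPost("http://192.168.1.69:8888/sdroidmarshal");
        byte[] payload = offers.build().toByteArray();   // binary wire format
        ByteArrayEntity entity = new ByteArrayEntity(payload);
        entity.setContentType("application/octet-stream");
        postRequest.setEntity(entity);
        String responseBody = client.execute(postRequest, new BasicResponseHandler());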

  • Difficulty porting raw PCM output code from Java to Android AudioTrack API.

    - by IndigoParadox
    I'm attempting to port an application that plays chiptunes (NSF, SPC, etc) music files from Java SE to Android. The Android API seems to lack the javax multimedia classes that this application uses to output raw PCM audio. The closest analog I've found in the API is AudioTrack and so I've been wrestling with that. However, when I try to run one of my sample music files through my port-in-progress, all I get back is static. My suspicion is that it's the AudioTrack I've set up that is at fault. I've tried various different constructors, but it all just outputs static in the end. The DataLine setup in the original code is something like: AudioFormat audioFormat = new AudioFormat( AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 2, 4, 44100, true ); DataLine.Info lineInfo = new DataLine.Info( SourceDataLine.class, audioFormat ); DataLine line = (SourceDataLine)AudioSystem.getLine( lineInfo ); The constructor I'm using right now is: AudioTrack = new AudioTrack( AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT, AudioTrack.getMinBufferSize( 44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT ), AudioTrack.MODE_STREAM ); I've replaced constants and variables in those so they make sense as concisely as possible, but my basic question is if there are any obvious problems in the assumptions I made when going from one format to the other.
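
    A hedged guess at the static, based only on the two snippets: the Java SE AudioFormat declares big-endian samples (the trailing true), while AudioTrack consumes 16-bit PCM in native (little-endian) byte order on Android, so each sample's byte pair would need swapping before write(). A sketch, where buffer is one chunk of PCM data:

        // Swap each big-endian sample pair to little-endian before writing.
        for (int i = 0; i + 1 < buffer.length; i += 2) {
            byte hi = buffer[i];
            buffer[i] = buffer[i + 1];
            buffer[i + 1] = hi;
        }
        audioTrack.write(buffer, 0, buffer.length);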

  • How to code a 'Next in Results' within search results in PHP

    - by thebluefox
    Right, bit of a head scratcher, although I've got a feeling there's an obvious answer and I'm just not seeing the wood for the trees. Basically, I'm using Solr as a search engine for my site, bringing back 15 results per page. When you click on a result, you get a detail page that has a "Next in Results" link on it, which obviously forwards you on to the next result. What's the best way of doing this? I've come up with a few solutions, but they're either too impractical or just don't work. I could store all the ids in a session array, then grab the one after the current one and put that in the link. But with possibly hundreds/thousands of results, the memory that array would need, and the performance hit of dealing with it, isn't practical. I could take the same approach and put it into the db, but I'll still have to deal with a potentially huge array when I grab them out of the db. Or; I could do the search again, only returning the ids, and grab the one after the one we're currently looking at. I think this could be the best option? Although it does seem kind of messy, namely because of when I have to select the id that's on a different 'page' (ie the 16th, 31st etc result). Unless I pass through where it was in the results, and select from there, but that still doesn't seem like the right way to do it. I'm really sorry if this is just complete nonsense; any help is massively appreciated as always. Cheers guys!
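
    A sketch of that last option, re-querying for a single id at the next offset so nothing has to be carried between requests (the client call is hypothetical; the point is one row, starting just past the current position, fetching only the id field):

        function nextResultId($solr, $query, $position) {
            // rows=1, start=position+1, fl=id
            $response = $solr->search($query, $position + 1, 1, array('fl' => 'id'));
            $docs = $response->response->docs;
            return empty($docs) ? null : $docs[0]->id;
        }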

  • How to generate XML from an Excel VBA macro?

    - by SuperNES
    So, I've got a bunch of content that was delivered to us in the form of Excel spreadsheets. I need to take that content and push it into another system. The other system takes its input from an XML file. I could do all of this by hand (and trust me, management has no problem making me do that!), but I'm hoping there's an easy way to write an Excel macro that would generate the XML I need instead. This seems like a better solution to me, as this is a job that will need to be repeated regularly (we'll be getting a LOT of content in Excel sheets) and it just makes sense to have a batch tool that does it for us. However, I've never experimented with generating XML from Excel spreadsheets before. I have a little VBA knowledge but I'm a newbie to XML. I guess my problem in Googling this is that I don't even know what to Google for. Can anyone give me a little direction to get me started? Does my idea sound like the right way to approach this problem, or am I overlooking something obvious? Thanks StackOverflow!
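
    As a starting point, a macro can simply walk the rows and Print XML to a text file; a minimal sketch, assuming one record per row with headers in row 1 (element names and the output path are placeholders, and special characters would still need escaping):

        Sub ExportRowsAsXml()
            Dim ws As Worksheet, r As Long, f As Integer
            Set ws = ThisWorkbook.Worksheets(1)
            f = FreeFile
            Open "C:\export\content.xml" For Output As #f
            Print #f, "<?xml version=""1.0"" encoding=""UTF-8""?>"
            Print #f, "<items>"
            For r = 2 To ws.Cells(ws.Rows.Count, 1).End(xlUp).Row
                Print #f, "  <item title=""" & ws.Cells(r, 1).Value & _
                          """ body=""" & ws.Cells(r, 2).Value & """/>"
            Next r
            Print #f, "</items>"
            Close #f
        End Sub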

  • Multi-threaded library calls in ASP.NET page request.

    - by ProfK
    I have an ASP.NET app - very basic, but right now too much code to post, if we're lucky and I don't have to. We have a class called ReportGenerator. On a button click, the method GenerateReports is called. It makes an async call to InternalGenerateReports using ThreadPool.QueueUserWorkItem and returns, ending the ASP.NET response. It doesn't provide any completion callback or anything. InternalGenerateReports creates and maintains five threads in the thread pool, one report per thread, also using QueueUserWorkItem, and waits in a loop until the calls on all of them complete. Each thread uses an ASP.NET ReportViewer control to render a report to HTML. That is, for 200 reports, InternalGenerateReports should create 5 threads 40 times. As threads complete, report data is queued, and when all five have completed, report data is flushed to disk. My biggest problems are that after running just one report, the aspnet process is 'hung', and also that at around 200 reports, the app just hangs. I just simplified this code to run in a single thread, and this works fine. Before we get into details like my code, is there anything obvious in the above scenario that might be wrong?

  • Why can I call a non-const member function pointer from a const method?

    - by sdg
    A co-worker asked about some code like this that originally had templates in it. I have removed the templates, but the core question remains: why does this compile OK? #include <iostream> class X { public: void foo() { std::cout << "Here\n"; } }; typedef void (X::*XFUNC)() ; class CX { public: explicit CX(X& t, XFUNC xF) : object(t), F(xF) {} void execute() const { (object.*F)(); } private: X& object; XFUNC F; }; int main(int argc, char* argv[]) { X x; const CX cx(x,&X::foo); cx.execute(); return 0; } Given that CX is a const object, and its member function execute is const, therefore inside CX::execute the this pointer is const. But I am able to call a non-const member function through a member function pointer. Are member function pointers a documented hole in the const-ness of the world? What (presumably obvious to others) issue have we missed?
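
    The usual explanation: const applies to the members themselves, and object is a reference. In a const member function the reference member becomes const (which adds nothing for a reference, since it can never be reseated anyway), but the X it refers to stays non-const, so the call is legal. Holding the X by value makes the difference visible; this variant should fail to compile:

        class CY {
        public:
            explicit CY(X t, XFUNC xF) : object(t), F(xF) {}
            void execute() const { (object.*F)(); } // error: object is const here,
                                                    // and X::foo() is non-const
        private:
            X object;   // by value, not by reference
            XFUNC F;
        };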

  • My chance to shape our development process/policy

    - by Matt Luongo
    Hey guys, I'm sorry if this is a duplicate, but the question search terms are pretty generic. I work at a small(ish) development firm. I say small, but the company is actually a fair size; however, I'm only the second full-time developer, as most past work has been organized around contractors. I'm in a position to define internal project process and policy - obvious stuff like SCM and unit-testing. Methodology is outside the scope of the document I'm putting together, but I'd really like to push us in a leaner (and maybe even Agile?) direction. I feel like I have plenty of good practice recommendations, but not enough solid motivation to make my document the spirit guide I'd like it to be. I've separated the document into "principles" and "recommendations". Recommendations have been easy to come up with. Use SCM, strive for 1-step, regularly scheduled builds, unit test first, document as you go... Listing the principles that are supposed to be informing these recommendations, though, has been rough. I've come up with "tools work for us; we should never work for tools" and a hazy clause aimed at our QA (which has been overly manual) that I'd like to read "tedium is the root of all evil". I don't want to miss an opportunity with this document to give us a good in-house start and maybe even push us toward Agile. What principles am I missing?

  • Can't seem to sign AWS CloudFront URL properly

    - by Joe Corkery
    Hi everybody, I found a lot of detailed examples online on how to sign an Amazon CloudFront URL for private content. Unfortunately, whenever I implement these examples my URL doesn't seem to work. The resource path is correct because I can download the file when it is set for world read, but the URL doesn't work when set just for authorized users. The PHP code I am using is below. If anybody has any insights as to what I might be doing wrong (I'm guessing it is something obvious that I am just not seeing right now), it would be greatly appreciated. function urlCloudFront($resource) { $AWS_CF_KEY = 'APKA...'; $priv_key = file_get_contents(path_to_pem_file); $pkeyid = openssl_get_privatekey($priv_key); $expires = strtotime("+ 3 hours"); $policy_str = '{"Statement":[{"Resource":"'.$resource.'","Condition":{"DateLessThan":{"AWS:EpochTime":'.$expires.'}}}]}'; $policy_str = trim( preg_replace( '/\s+/', '', $policy_str ) ); $res = openssl_sign($policy_str, $signature, $pkeyid, OPENSSL_ALGO_SHA1); $signature_base64 = (base64_encode($signature)); $repl = array('+' => '-','=' => '_','/' => '~'); $signature_base64 = strtr($signature_base64,$repl); $url = $resource . '?Expires=' .$expires. '&Signature=' . $signature_base64 . '&Key-Pair-Id='. $AWS_CF_KEY; print '<p><A href="' .$url. '">Download VIDA (CloudFront)</A>'; } urlCloudFront("http://mydistcloud.cloudfront.net/mydir/myfile.tar.gz"); Thanks.

  • boost::filesystem - how to create a boost path from a windows path string on posix platforms?

    - by VolkA
    I'm reading path names from a database which are stored as relative paths in Windows format, and try to create a boost::filesystem::path from them on a Unix system. What happens is that the constructor call interprets the whole string as the filename. I need the path to be converted to a correct Posix path as it will be used locally. I didn't find any conversion functions in the boost::filesystem reference, nor through google. Am I just blind, is there an obvious solution? If not, how would you do this? Example: std::string win_path("foo\\bar\\asdf.xml"); std::string posix_path("foo/bar/asdf.xml"); // loops just once, as part is the whole win_path interpreted as a filename boost::filesystem::path boost_path(win_path); BOOST_FOREACH(boost::filesystem::path part, boost_path) { std::cout << part << std::endl; } // prints each path component separately boost::filesystem::path boost_path_posix(posix_path); BOOST_FOREACH(boost::filesystem::path part, boost_path_posix) { std::cout << part << std::endl; }
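
    A common answer is to normalise the separators before constructing the path, which is safe here because the stored paths are relative; in generic (slash-separated) form they are understood on every platform. A minimal sketch:

        #include <algorithm>
        #include <string>
        #include <boost/filesystem.hpp>

        std::string win_path("foo\\bar\\asdf.xml");
        // Turn backslashes into the generic separator.
        std::replace(win_path.begin(), win_path.end(), '\\', '/');
        boost::filesystem::path boost_path(win_path);   // now splits per component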

  • How to run setInterval() on multiple canvases simultaneously?

    - by Alex
    I have a page which has several <canvas> elements. I am passing the canvas ID and an array of data to a function which then grabs the canvas info and passes the data onto a draw() function which in turn processes the given data and draws the results onto the canvas. So far, so good. Example data arrays; $(function() { setup($("#canvas-1"), [[110,110,100], [180,180,50], [220,280,80]]); setup($("#canvas-2"), [[110,110,100], [180,180,50], [220,280,80]]); }); setup function; function setup(canvas, data) { ctx = canvas[0].getContext('2d'); var i = data.length; var dimensions = { w : canvas.innerWidth(), h : canvas.innerHeight() }; draw(dimensions, data, i); } This works perfectly. draw() runs and each canvas is populated. However - I need to animate the canvas. As soon as I replace line 8 of the above example; draw(dimensions, data, i); with setInterval( function() { draw(dimensions, data, i); }, 33 ); It stops working and only draws the last canvas (with the others remaining blank). I'm new to both javascript and canvas so sorry if this is an obvious one, still feeling my way around. Guidance in the right direction much appreciated! Thanks.
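
    A likely culprit, hedged since draw() isn't shown: ctx is assigned without var, making it a single global that each setup() call overwrites. The synchronous draw() runs before the next overwrite, but the deferred setInterval() callbacks all fire later and find ctx pointing at the last canvas. Keeping the context local and passing it through avoids this; a sketch:

        function setup(canvas, data) {
            var ctx = canvas[0].getContext('2d');   // 'var' keeps it per-canvas
            var i = data.length;
            var dimensions = { w: canvas.innerWidth(), h: canvas.innerHeight() };
            setInterval(function() {
                draw(ctx, dimensions, data, i);     // draw() now takes the context
            }, 33);
        }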

  • Visual Studio hangs when deploying a cube

    - by Richie
    Hello All, I'm having an issue with an Analysis Services project in Visual Studio 2005. My project always builds but only occasionally deploys. No errors are reported and VS just hangs. This is my first Analysis Services project, so I am hoping that there is something obvious that I am just missing. Here is the situation: I have a cube that I have successfully deployed. I then make some change, e.g., adding a hierarchy to a dimension. When I try to deploy again, VS hangs. I have to restart Analysis Services to regain control of VS so I can shut it down. I restart everything - sometimes once, sometimes twice or more - before the project will eventually deploy. This happens with any change I make; there seems to be no pattern to this behaviour. Sometimes I have to delete the cube from Analysis Services before restarting everything to get a successful deploy. Also, I have successfully deployed the cube and then subsequently successfully reprocessed a dimension, but when I then open a query window in SQL Server Management Studio it says that it can't find any cubes. As a test I have deployed a cube successfully. I have then deleted it in Analysis Services and attempted to redeploy it, without making any changes to the cube, only to see the same behaviour mentioned above. VS just hangs with no reason given, so I have no idea where to start hunting down the problem. It is taking 15-20 minutes to make a change as simple as setting the NameColumn of a dimension attribute. As you can imagine this is taking hours of my time, so I would greatly appreciate any assistance anyone can give me.

  • How to make HTML layout whitespace-agnostic?

    - by ssg
    If you have consecutive inline-blocks, white-space becomes significant: it adds some level of space between elements. What's the "correct" way of avoiding the whitespace effect in an HTML layout if you want those blocks to look stuck to each other? Example: <span>a</span> <span>b</span> This renders differently than: <span>a</span><span>b</span> because of the space in between. I want the whitespace effect to go away without compromising the HTML source code layout. I want my HTML templates to stay clean and well-indented. I think these options are ugly: 1) Tweaking text-indent, margin, padding etc. (because it would be dependent on font-size, default white-space width etc.) 2) Putting everything on a single line, next to each other. 3) Zero font-size. That would require overriding font-size in blocks, which would otherwise be inherited. 4) Possible document-wide solutions. I want the solution to stay local to a certain block of HTML. Any ideas, any obvious points which I'm missing?
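
    One trick that keeps the source indented without touching font metrics is to swallow the gaps inside HTML comments, so nothing but markup changes; a sketch:

        <span>a</span><!--
        --><span>b</span><!--
        --><span>c</span>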

  • CakePHP 1.26: Bug in 'Security' component?

    - by Steve
    Okay, for those of you who may have read this earlier, I've done a little research and completely revamped my question. I've been having a problem where my form requests get blackholed by the Security component, although everything works fine when the Security component is disabled. I've traced it down to a single line in a form: <?php echo $form->create('Audition');?> <fieldset> <legend><?php __('Edit Audition');?></legend> <?php echo $form->input('ensemble'); echo $form->input('position'); echo $form->input('aud_date'); // The following line works fine... echo $form->input('owner'); // ...but the following line blackholes when Security included // and the form is submitted: // echo $form->input('owner', array('disabled'=>'disabled')); ?> </fieldset> <?php echo $form->end('Submit');?> (I've commented out the offending line for clarity) I think I'm following the rules by using the form helper; as far as I can tell, this is a bug in the Security component, but I'm too much of a CakePHP n00b to know for sure. I'd love to get some feedback, and if it's a real bug, I'll submit it to the CakePHP team. I'd also love to know if I'm just being dumb and missing something obvious here.
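
    For what it's worth, disabled inputs are never submitted, so the Security component's hash of expected fields no longer matches the POST and the request gets blackholed; that would make this behaviour by design rather than a bug. CakePHP 1.2's SecurityComponent has a disabledFields property meant for exactly this case; a hedged sketch of the controller side:

        // Tell Security not to expect the field in the POST data.
        var $components = array('Security');

        function beforeFilter() {
            parent::beforeFilter();
            $this->Security->disabledFields = array('Audition.owner');
        }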

  • most efficient method of turning multiple 1D arrays into columns of a 2D array

    - by Ty W
    As I was writing a for loop earlier today, I thought that there must be a neater way of doing this... so I figured I'd ask. I looked briefly for a duplicate question but didn't see anything obvious. The Problem: Given N arrays of length M, turn them into a M-row by N-column 2D array Example: $id = [1,5,2,8,6] $name = [a,b,c,d,e] $result = [[1,a], [5,b], [2,c], [8,d], [6,e]] My Solution: Pretty straight forward and probably not optimal, but it does work: <?php // $row is returned from a DB query // $row['<var>'] is a comma separated string of values $categories = array(); $ids = explode(",", $row['ids']); $names = explode(",", $row['names']); $titles = explode(",", $row['titles']); for($i = 0; $i < count($ids); $i++) { $categories[] = array("id" => $ids[$i], "name" => $names[$i], "title" => $titles[$i]); } ?> note: I didn't put the name = value bit in the spec, but it'd be awesome if there was some way to keep that as well.

  • Why does SQLite take such a long time to fetch the data?

    - by Derk
    I have two possible queries, both giving the result set I want. Query one takes about 30ms, but 150ms to fetch the data from the database. SELECT id FROM featurevalues as featval3 WHERE featval3.feature IN (?,?,?,?) AND EXISTS ( SELECT 1 FROM product_to_value, product_to_value as prod2, features, featurevalues WHERE product_to_value.feature = features.int AND product_to_value.value = featurevalues.id AND features.id = ? AND featurevalues.id IN (?,?) AND product_to_value.product = prod2.product AND prod2.value = featval3.id ) Query two takes about 3ms -this is the one I therefore prefer-, but also takes 170ms to fetch the data. SELECT ( SELECT prod2.value FROM product_to_value, product_to_value as prod2, features, featurevalues WHERE product_to_value.feature = features.int AND product_to_value.value = featurevalues.id AND features.id = ? AND featurevalues.id IN (?,?) AND product_to_value.product = prod2.product AND prod2.value = featval3.id ) as id FROM featurevalues as featval3 WHERE featval3.feature IN (?,?,?,?) The 170ms seems to be related to the number of rows from table featval3. After an index is used on featval3.feature IN (?,?,?,?), 151 items "remain" in featval3. Is there something obvious I am missing regarding the slow fetching? As far as I know everything is properly indexed.. I am confused because the second query only takes a blazing 3ms to run.

  • Is there a way to transform a list of key/value pairs into a data transfer object

    - by weevie
    ...apart from the obvious looping through the list and a dirty great case statement! I've turned over a few LINQ queries in my head but nothing seems to get anywhere close. Here's an example DTO if it helps: class ClientCompany { public string Title { get; private set; } public string Forenames { get; private set; } public string Surname { get; private set; } public string EmailAddress { get; private set; } public string TelephoneNumber { get; private set; } public string AlternativeTelephoneNumber { get; private set; } public string Address1 { get; private set; } public string Address2 { get; private set; } public string TownOrDistrict { get; private set; } public string CountyOrState { get; private set; } public string PostCode { get; private set; } } We have no control over the fact that we're getting the data in as KV pairs, I'm afraid.
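
    Reflection is one way to lose the case statement (the loop stays, but it no longer knows the field names): match each pair's key to a property. A sketch, assuming the keys match the property names exactly; GetSetMethod(true) is needed because the example DTO's setters are private:

        static T ToDto<T>(IEnumerable<KeyValuePair<string, string>> pairs) where T : new()
        {
            var dto = new T();
            foreach (var pair in pairs)
            {
                var prop = typeof(T).GetProperty(pair.Key);
                if (prop == null) continue;             // unknown key: skip or log
                var setter = prop.GetSetMethod(true);   // true: include private setters
                if (setter != null)
                    setter.Invoke(dto, new object[] { pair.Value });
            }
            return dto;
        }

        var company = ToDto<ClientCompany>(pairs);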

  • git rebase onto remote updates

    - by Blake Chambers
    I work with a small team that uses git for source code management. Recently, we have been doing topic branches to keep track of features, then merging them into master locally, then pushing them to a central git repository on a remote server. This works great when no changes have been made in master: I create my topic branch, commit it, merge it into master, then push. Hooray. However, if someone has pushed to origin before I do, my commits are not fast-forward, and thus a merge commit ensues. This also happens when a topic branch needs to merge with master locally to ensure my changes work with the code as of now. So, we end up with merge commits everywhere and a git log rivaling a friendship bracelet. So, rebasing is the obvious choice. What I would like is to: create topic branches holding several commits; checkout master and pull (fast-forward, because I haven't committed to master); rebase the topic branches onto the new head of master (so the topics start at master's head); then bring master up to my topic head. My way of doing this currently is listed below: git checkout master git rebase master topic_1 git rebase topic_1 topic_2 git checkout master git rebase topic_2 git branch -d topic_1 topic_2 Is there a faster way to do this?
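
    One commonly used shortening: fetch instead of checkout-and-pull, rebase the topics onto origin/master directly, and let the final merge fast-forward; a sketch of the same flow:

        git fetch origin
        git rebase origin/master topic_1    # moves topic_1 onto the new remote head
        git rebase topic_1 topic_2
        git checkout master
        git merge topic_2                   # fast-forward, no merge commit
        git branch -d topic_1 topic_2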

  • Scala implicit dynamic casting

    - by weakwire
    I would like to create a Scala views helper for Android using this combination of trait and class: class ScalaView(view: View) { def onClick(action: View => Any) = view.setOnClickListener(new OnClickListener { def onClick(view: View) { action(view) } }) } trait ScalaViewTrait { implicit def view2ScalaView(view: View) = new ScalaView(view) } I'm able to write onClick() like so: class MainActivity extends Activity with ScalaViewTrait { //.... val textView = new TextView(this) textView.onClick(v => v.asInstanceOf[TextView] setText ("asdas")) } My concern is that I want to avoid casting v to TextView; v will always be a TextView if applied to a TextView, a LinearLayout if applied to a LinearLayout, and so on. Is there any way that v gets dynamically cast to whatever view it is applied to? I just started with Scala and I need your help with this. UPDATE With the help of generics the above becomes this: class ScalaView[T](view: View) { def onClick(action: T => Any) = view.setOnClickListener(new OnClickListener { def onClick(view: View) { action(view.asInstanceOf[T]) } }) } trait ScalaViewTrait { implicit def view2ScalaView[T](view: View) = new ScalaView[T](view) } I can write onClick like this: view2ScalaView[TextView](textView) .onClick(v => v.setText("asdas")) but it's obvious that I don't get any help from the implicit, and I've moved the problem instead of removing it.
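
    Binding the type parameter at the conversion, with an upper bound on View, lets the compiler infer T from the receiver's static type, so the cast disappears at the call site; a sketch:

        class ScalaView[T <: View](view: T) {
          def onClick(action: T => Any) = view.setOnClickListener(new OnClickListener {
            def onClick(v: View) { action(view) }  // the captured, correctly-typed view
          })
        }

        trait ScalaViewTrait {
          implicit def view2ScalaView[T <: View](view: T): ScalaView[T] = new ScalaView[T](view)
        }

        // T is inferred as TextView from the receiver:
        textView.onClick(v => v.setText("asdas"))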

  • Changing where a resource is pulled during runtime?

    - by Brandon
    I have a website that goes out to multiple clients. Sometimes a client will insist on minor changes. For reasons beyond my control, I have to comply no matter how minor the request. Usually this isn't a problem; I would just create a client-specific version of the user control or page and overwrite the default one during build time, or make a configuration setting to handle it. Now that I am localizing the site, I'm curious about the best way to go about making minor wording changes. Let's say I have a resource file called Resources.resx that has 300 resources in it. It has a resource called Continue. The English value is "Continue", the French value is "Continuez". Now one client, for whatever reason, wants it to say "Next" and "Après" and the others want to keep it the same. What is the best way to accommodate a request like this? (This is just a simple example). The only two ways I can think of are to 1) Create another Resources.resx specific to the client, and replace the .dll during build time. Since I'd be completely replacing the dll, the new resource file would have to contain all 300 strings. The obvious problem being that I now have 2 resource files, each with 300 strings to maintain. 2) Create a custom user control/page and change it to use a custom resource file. e.g. SignIn.ascx would be replaced during the build and it would pull its resources from ClientName.resx instead of Resources.resx. Are there any other things I could try? Is there any way to change it so that the application will always look in a ClientResources.resx file for the overridden values before actually looking at the specified resource file?
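
    A third option is a lookup wrapper that consults a client-specific resource set first and falls back to the defaults, so only the overridden strings are duplicated; a sketch, where ClientResources is a hypothetical per-client .resx swapped in at build time:

        public static class Strings
        {
            public static string Get(string key)
            {
                // Client override wins; GetString returns null when the key is absent.
                return ClientResources.ResourceManager.GetString(key)
                       ?? Resources.ResourceManager.GetString(key);
            }
        }

        // Strings.Get("Continue") -> "Next" for the one client, "Continue" elsewhere.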

  • Serialized NHibernate Configuration objects - detect out of date or rebuild on demand?

    - by fostandy
    I've been using serialized NHibernate Configuration objects (also discussed here and here) to speed up my application startup from about 8s to 1s. I also use fluent-nhibernate, so the path is more like: ClassMap class definitions in code -> FluentConfiguration -> XML -> NHibernate Configuration -> Configuration serialized to disk. The problem with doing this is that one runs the risk of out-of-date mappings - if I change the mappings but forget to rebuild the serialized configuration, then I end up using the old mappings without realising it. This does not always result in an immediate and obvious error during testing, and several times the misbehaviour has been a real pain to detect and fix. Does anybody have any idea how I would be able to detect if my classmaps have changed, so that I could either issue an immediate warning/error or rebuild it on demand? At the moment I am comparing timestamps on my compiled assembly against the serialized configuration. This will pick up mapping changes, but unfortunately it generates a massive false positive rate, as ANY change to the code results in an out-of-date flag. I can't move the classmaps to another assembly, as they are tightly integrated into the business logic. This has been niggling me for a while, so I was wondering if anybody had any suggestions?
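
    One way to cut the false positives is to fingerprint the mapped model instead of the assembly timestamp: hash the ClassMap types and the shape of the entities they map, store the hash beside the serialized Configuration, and rebuild when it changes. A heuristic sketch - mapping-only tweaks (say, a renamed column inside a map) can still slip through:

        static string MappingFingerprint(Assembly asm)
        {
            var sb = new StringBuilder();
            var maps = asm.GetTypes()
                .Where(t => t.BaseType != null && t.BaseType.IsGenericType &&
                            t.BaseType.GetGenericTypeDefinition() == typeof(ClassMap<>))
                .OrderBy(t => t.FullName);
            foreach (var map in maps)
            {
                sb.Append(map.FullName);
                // Most mapping changes follow an entity change, so include the
                // mapped type's property names and types in the fingerprint.
                foreach (var p in map.BaseType.GetGenericArguments()[0].GetProperties())
                    sb.Append(p.Name).Append(':').Append(p.PropertyType.FullName);
            }
            using (var md5 = MD5.Create())
                return Convert.ToBase64String(
                    md5.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString())));
        }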

  • HTTP Compression problems on IIS7

    - by Jonathan Wood
    I've spent quite a bit of time on this but seem to be going nowhere. I have a large page that I really want to speed up. The obvious place to start seems to be HTTP compression, but I just can't seem to get it to work for me. After considerable searching, I've tried several variations of the code below. It kind of works, but after refreshing the browser, the results seem to fall apart. They were turning to garbage when the page used caching. If I turn off caching, then the page seems right, but I lose my CSS formatting (stored in a separate file) and get an error that an included JS file contains invalid characters. Most of the resources I've found on the Web were either very old or focused on accessing IIS directly. My page is running on a shared hosting account and I do not have direct access to IIS7, which it's running on. protected void Application_BeginRequest(object sender, EventArgs e) { // Implement HTTP compression if (Request["HTTP_X_MICROSOFTAJAX"] == null) // Avoid compressing AJAX calls { // Retrieve accepted encodings string encodings = Request.Headers.Get("Accept-Encoding"); if (encodings != null) { // Verify support for gzip or deflate (gzip takes preference) encodings = encodings.ToLower(); if (encodings.Contains("gzip") || encodings == "*") { Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress); Response.AppendHeader("Content-Encoding", "gzip"); Response.Cache.VaryByHeaders["Accept-encoding"] = true; } else if (encodings.Contains("deflate")) { Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress); Response.AppendHeader("Content-Encoding", "deflate"); Response.Cache.VaryByHeaders["Accept-encoding"] = true; } } } } Is anyone having better success with this?
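
    If the host allows it, IIS7 can take over compression entirely through configuration, which sidesteps the filter-versus-output-caching interaction (a cached response can keep its Content-Encoding header while the filter no longer runs - one plausible source of the garbage, and of the broken CSS/JS). A sketch of the web.config section:

        <system.webServer>
          <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        </system.webServer>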
