Search Results

Search found 11991 results on 480 pages for 'cedric copy'.

  • Boost::Mutex & Malloc

    - by M. Tibbits
    Hi all, I'm trying to use a faster memory allocator in C++. I can't use Hoard due to licensing / cost. I was using NEDMalloc in a single-threaded setting and got excellent performance, but I'm wondering if I should switch to something else. As I understand things, NEDMalloc is just a replacement for the C-based malloc() and free(), not the C++ new and delete operators (which I use extensively).

    The problem is that I now need to be thread-safe, so I'm trying to malloc an object which is reference counted (to prevent excess copying), but which also contains a mutex pointer. That way, if you're about to delete the last copy, you first lock the pointer, then free the object, and lastly unlock and free the mutex. However, using malloc to create a boost::mutex appears impossible, because I can't initialize the private object: calling the constructor directly is forbidden.

    So I'm left with this odd situation, where I'm using new to allocate the lock and nedmalloc to allocate everything else. But when I allocate a large amount of memory, I run into allocation errors (which disappear when I switch to malloc instead of nedmalloc, though the performance is terrible). My guess is that this is due to fragmentation in the memory and an inability of nedmalloc and new to play nice side by side. There has to be a better solution. What would you suggest?
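    A minimal sketch of the placement-new route (using plain malloc as a stand-in for nedmalloc's replacement functions): placement new runs a constructor in storage you already own, which sidesteps the "can't call the constructor" problem.

    ```cpp
    #include <new>       // placement new
    #include <cstdlib>   // std::malloc / std::free
    #include <boost/thread/mutex.hpp>

    int main() {
        // Grab raw storage from any C allocator (nedmalloc's functions
        // would be used the same way), then construct the mutex in place.
        void* raw = std::malloc(sizeof(boost::mutex));
        boost::mutex* m = new (raw) boost::mutex;   // placement new calls the ctor

        m->lock();
        m->unlock();

        m->~mutex();      // placement new has no matching delete: destroy by hand
        std::free(raw);   // release storage with the matching allocator
        return 0;
    }
    ```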

  • csc.exe not found error

    - by pokrate
    I have installed a fresh copy of Windows XP (2002) with SP2, and then Visual Studio 2008 Enterprise Edition. I am trying to build the simplest possible web application, and it does not compile, giving the error "csc.exe not found". I googled a lot and spotted the problem in the following section in web.config:

    ```xml
    <system.codedom>
      <compilers>
        <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4"
                  type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
          <providerOption name="CompilerVersion" value="v3.5"/>
          <providerOption name="WarnAsError" value="false"/>
        </compiler>
        <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" warningLevel="4"
                  type="Microsoft.VisualBasic.VBCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
          <providerOption name="CompilerVersion" value="v3.5"/>
          <providerOption name="OptionInfer" value="true"/>
          <providerOption name="WarnAsError" value="false"/>
        </compiler>
      </compilers>
    </system.codedom>
    ```

    But if I remove the C# compiler section and then compile, it compiles fine with the VB compiler section. And if I change the value from v3.5 to v2.0 in the C# section, it also compiles fine; but then none of my LINQ queries are recognized by the compiler, even though System.Linq and all the classes in it are accessible in the code. Please help with this weird behavior.

  • Unexpected end of file while searching for ']' to end attribute selector.

    - by zurna
    I don't understand what the problem is with the following code. It needs to copy an image's id value to another textbox, but instead I get an error: "Unexpected end of file while searching for ']' to end attribute selector."

    ```js
    <script>
    $(function() {
      $(".floatLeft").click(function() {
        var id = $(this).attr("id").replace(/\D/g, "");
        $("input[name='photo[" + id + "]'").val(Math.abs($("input[name='photo[" + id + "]'").val() - 1));
      });
    });
    </script>
    ```

    ```html
    <ul class="thumbs">
      <li>
        <img src="/FLPM/media/news/images/2M9Y1I2K_sm.jpg" alt="Garden" id="28" class="floatLeft" />
        <input type="text" name="photo28" value="0" />
        <br />
        <a href="?Process=&IMAGEID=28" class="thumb"><span class="floatLeft">DELETE</span></a>
      </li>
      <li>
        <img src="/FLPM/media/news/images/2A9L1V2X_sm.jpg" alt="Frangipani Flowers" id="27" class="floatLeft" />
        <input type="text" name="photo27" value="0" />
        <br />
        <a href="?Process=&IMAGEID=27" class="thumb"><span class="floatLeft">DELETE</span></a>
      </li>
    </ul>
    ```
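    A hedged sketch of the likely fix: the error comes from the attribute selector never being closed (the selector string ends with ' but no ]), and note that the inputs above are actually named photoNN, not photo[NN]:

    ```js
    $(function() {
      $(".floatLeft").click(function() {
        var id = $(this).attr("id").replace(/\D/g, "");
        // terminated selector, matching the real input names like "photo28"
        var $input = $("input[name='photo" + id + "']");
        $input.val(Math.abs($input.val() - 1));
      });
    });
    ```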

  • Deployments and TFS, general questions

    - by Velika
    SOX requires that we have a separate group deploy our ASP.NET web applications to production. Currently, that group has access to our code repository in VSS and uses VSS to deploy code that has been checked in. How are deployments typically done for web applications? As a developer, I have used the Deploy function in Visual Studio to deploy code to a network share which corresponds to an IIS virtual folder, but I don't think we can expect the deployment group to purchase a copy of Visual Studio just to do deployments. We could check the code into TFS, but what is the minimum software that group would need to perform the deployment? Would a Team Explorer client access license suffice? I am aware that Team System has functionality to automate the building of an application.

    Do people typically deploy to production by copying aspx and dll files from the QA environment, or do you normally deploy from TFS or even VS directly? It seems to me that the preferred approach would be to deploy from the QA environment, since that is the environment that must have been approved for release, or that those files should be checked into TFS and then deployed from TFS, assuming you can deploy from TFS.

    What confuses me is the bin (binary) files that are local to the project: do they go into TFS? If so, doesn't this create problems for other developers, in that only one developer, the one with the binary checked out, can actually debug, because debugging requires write access to the binaries? Does this mean the binaries shouldn't be checked into TFS? But eventually, if you deploy from TFS, the binaries HAVE to be added to TFS. Are they added as a separate (compiled) application node? If so, this sounds really ugly. I would assume not. How does one ensure that the binaries match the source code that we mark with a particular version number?

    Obviously, I'm clueless. Can someone give me a general idea of how you handle version control and deployments, in particular using TFS?

  • How can I sync files in two different git repositories (not clones) and maintain history?

    - by brian d foy
    I maintain two different git repos that need to share some files, and I'd like the commits in one repo to show up in the other. What's a good way to do that for ongoing maintenance?

    I've been one of the maintainers of the perlfaq (Github), and recently I fell into the role of maintaining the Perl core documentation, which is also in git. Long before I started maintaining the perlfaq, it lived in a separate source control repository. I recently converted that to git. Periodically, one of the perl5-porters would sync the shared files in the perlfaq repo and the perl repo. Since we've switched to git, we've been a bit lazy converting the tools, and I'm now the one who does that. For the time being, the two repos are going to stay separate.

    Currently, to sync the FAQ for a new (monthly) release of perl, I'm almost ashamed to say that I merely copy the perlfaq*.pod files in the perlfaq repo and overlay them in the perl repo. That loses history, etc. Additionally, sometimes someone makes a change to those files in the perl repo and I end up overwriting it (yes, check git diff, you idiot!). The files do not have the same paths in the two repos, but that's something I could change, I think.

    What I'd like to do, in the magical universe of rainbows and ponies, is pull the objects from the perlfaq repo and apply them in the perl repo, and vice versa, so the history and commit ids correspond in each.

    - Creating patches works, but it's also a lot of work to manage.
    - Git submodules seem to only work for pulling in the entire external repo.
    - I haven't found something like svn's file externals, but that wouldn't work in both directions anyway.
    - I'd love to just fetch objects from one and cherry-pick them in the other.

    What's a good way to manage this?
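    One hedged sketch of the patch route (the pod/ prefix and the "last-sync" tag are assumptions, not from the question): format-patch and am preserve authorship and messages across the two repos, though the commit ids themselves will still differ.

    ```sh
    # In the perlfaq repo: export every commit since the previous sync
    # ("last-sync" is a hypothetical tag marking that point)
    git format-patch --stdout last-sync..master -- perlfaq*.pod > faq.mbox

    # In the perl repo: replay them, remapping the path difference
    git am --directory=pod/ faq.mbox
    ```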

  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into two parts: the Front End, which holds all the objects except tables, and the Back End, which holds the tables. The msdn page links to the article Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability, which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time any changes are made to the Front End, they need to be deployed to every user machine.

    My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption?

    Thank you

    EDIT: Just to clarify, the scenario in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about the dangers if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming that the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case?

    It is a follow-up to my previous question, From SQL Server to MS Access 2007.

  • git clone fails with "index-pack" failed?

    - by gct
    So I created a remote repo that's not bare (because I need redmine to be able to read it), and it's set to be shared with the group (so git init --shared=group). I was able to push to the remote repo and now I'm trying to clone it. If I clone it over the net I get this:

    ```
    remote: Counting objects: 4648, done.
    remote: Compressing objects: 100% (2837/2837), done.
    error: git-upload-pack: git-pack-objects died with error.
    fatal: git-upload-pack: aborting due to possible repository corruption on the remote side.
    remote: aborting due to possible repository corruption on the remote side.
    fatal: early EOF
    fatal: index-pack failed
    ```

    I'm able to clone it locally without a problem, and I ran git fsck, which only reports some dangling trees/blobs, which I understand aren't a problem. What could be causing this? I'm still able to pull from it, just not clone. I should note the remote git version is 1.5.6.5 while the local one is 1.6.0.4.

    I tried cloning my local copy of the repo, stripping out the .git folder and pushing to a new repo, then cloning the new repo, and I get the same error, which leads me to believe it may be a file in the repo that's causing git-upload-pack to fail...

    Edit: I have a number of Windows binaries in the repo, because I just built the python modules and then stuck them in there so everyone else didn't have to build them as well. If I remove the Windows binaries and push to a new repo, I can clone again; perhaps that gives a clue. Trying to narrow down exactly which file is causing the problem now.

  • SVN repository split problem

    - by Tuminoid
    I want to split a directory from a large Subversion repository into a repository of its own, and keep the history of the files in that directory. I tried the regular way of doing it first:

    ```sh
    svnadmin dump /path/to/repo > largerepo.dump
    cat largerepo.dump | svndumpfilter include my/directory > mydir.dump
    ```

    but that does not work, since the directory has been moved and copied over the years and files have been moved into and out of it to other parts of the repository. The result is a lot of these:

    ```
    svndumpfilter: Invalid copy source path '/some/old/path'
    ```

    The next thing I tried was to include those /some/old/path entries as they appear, and after a long, long list of included files and directories, svndumpfilter completes, BUT importing the resulting dump does not produce the same files as the current directory has. So, how do I properly split the directory from that repository while keeping the history?

    EDIT: I specifically want trunk/myproj to be the trunk in a new repository PLUS have the new repository include none of the other old stuff, i.e. there should be no possibility for anyone to update to an old revision before the split and get/see the files. The svndumpfilter solution I tried would achieve exactly that; sadly it's not doable, since the paths/files have been moved around. The solution by ng isn't acceptable, since it's basically a clone plus removal of extras, which keeps ALL the history, not just the relevant myproj history.

    BUMP: C'mon, there must be someone who definitely knows if this is doable or not, and how!
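    For reference, a hedged sketch of the multi-path filter attempt (the historical paths are placeholders; --drop-empty-revs and --renumber-revs keep the new history compact):

    ```sh
    svnadmin dump /path/to/repo > largerepo.dump

    svndumpfilter include \
        trunk/myproj \
        some/old/path \
        another/old/path \
        --drop-empty-revs --renumber-revs \
        < largerepo.dump > mydir.dump

    svnadmin create /path/to/newrepo
    svnadmin load /path/to/newrepo < mydir.dump
    ```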

  • Use external datasource with NUnit's TestCaseAttribute

    - by Hamman359
    Is it possible to get the values for a TestCaseAttribute from an external data source, such as an Excel spreadsheet, CSV file or database? I.e. have a .csv file with one row of data per test case and pass that data to NUnit one row at a time.

    Here's the specific situation I'd like to use this for. I'm currently merging some features from one system into another. This is pretty much just a copy and paste process from the old system into the new one. Unfortunately, the code being moved not only does not have any tests, but is not written in a testable manner (i.e. it's tightly coupled with the database and other code). Taking the time to make the code testable isn't really possible, since it's a big mess, I'm on a tight schedule, and the entire feature is scheduled to be re-written from the ground up in the next 6-9 months. However, since I don't like the idea of not having any tests around the code, I'm going to create some simple Selenium tests using WebDriver to test the page through the UI. While this is not ideal, it's better than nothing.

    The page in question has about 10 input values and about 20 values that I need to assert against after the calculations are completed, with about 30 valid combinations of values that I'd like to test. I already have the data in a spreadsheet, so it'd be nice to simply be able to pull that out rather than having to re-type it all in Visual Studio.
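    A minimal sketch using NUnit's TestCaseSource (a sibling of TestCaseAttribute that takes a factory method), with an assumed CSV layout of 10 input columns followed by 20 expected-value columns; the path and class names are illustrative:

    ```csharp
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using NUnit.Framework;

    public class PageCalculationTests
    {
        // Hypothetical file: a header row, then one test case per line.
        public static IEnumerable<TestCaseData> CsvCases()
        {
            foreach (var line in File.ReadAllLines(@"TestData\cases.csv").Skip(1))
            {
                var cols = line.Split(',');
                yield return new TestCaseData(
                    cols.Take(10).ToArray(),    // the 10 page inputs
                    cols.Skip(10).ToArray());   // the 20 expected values
            }
        }

        [TestCaseSource("CsvCases")]
        public void PageCalculatesExpectedValues(string[] inputs, string[] expected)
        {
            // drive the page with Selenium WebDriver using inputs here,
            // then assert each output field against expected[i]...
        }
    }
    ```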

  • ASP.NET MVC - Binding a Child Entity to the Model

    - by Nathan Taylor
    This one seems painfully obvious to me, but for some reason I can't get it working the way I want it to. Perhaps it isn't possible the way I am doing it, but that seems unlikely. This question may be somewhat related: http://stackoverflow.com/questions/1274855/asp-net-mvc-model-binding-related-entities-on-same-page.

    I have an EditorTemplate to edit an entity with multiple related entity references. When the editor is rendered, the user is given a drop-down list to select related entities from, with the drop-down list returning an ID as its value:

    ```aspx
    <%=Html.DropDownListFor(m => m.Entity.ID)%>
    ```

    When the request is sent, the form value is named as expected, "Entity.ID", but the strongly typed model defined as an action parameter doesn't have Entity.ID populated with the value passed in the request:

    ```csharp
    public ActionResult AddEntity(EntityWithChildEntities entityWithChildEntities)
    {
    }
    ```

    I tried fiddling around with the Bind() attribute and specified Bind(Include = "Entity.ID") on the entityWithChildEntities, but that doesn't seem to work. I also tried Bind(Include = "Entity"), but that resulted in the ModelBinder attempting to bind a full "Entity" definition (not surprisingly). Is there any way to get the default model binder to fill the child entity ID, or will I need to add action parameters for each child entity's ID and then manually copy the values into the model definition?
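    One hedged workaround sketch (the repository lookup is a placeholder, not part of the question): bind just the posted scalar with a prefix, then resolve the child entity yourself rather than asking the binder to build it:

    ```csharp
    [HttpPost]
    public ActionResult AddEntity([Bind(Prefix = "Entity.ID")] int entityId)
    {
        var entityWithChildEntities = new EntityWithChildEntities();
        // hypothetical lookup; the point is that only the ID is model-bound
        entityWithChildEntities.Entity = _entityRepository.GetById(entityId);
        // ...validate and save...
        return RedirectToAction("Index");
    }
    ```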

  • Can I perform some processing on the POST data before ASP.NET MVC UpdateModel happens?

    - by Domenic
    I would like to strip out non-numeric elements from the POST data before using UpdateModel to update the copy in the database. Is there a way to do this?

    ```csharp
    // TODO: it appears I don't even use the parameter given at all, and all the magic
    // happens via UpdateModel and the "controller's current value provider"?
    [HttpPost]
    public ActionResult Index([Bind(Include = "X1, X2")] Team model) // TODO: stupid magic strings
    {
        if (this.ModelState.IsValid)
        {
            TeamContainer context = new TeamContainer();
            Team thisTeam = context.Teams.Single(t => t.TeamId == this.CurrentTeamId);

            // TODO HERE: apply StripWhitespace() to the data before using UpdateModel.
            // The data is currently somewhere in the "current value provider"?
            this.UpdateModel(thisTeam);

            context.SaveChanges();
            return this.RedirectToAction(c => c.Index());
        }
        else
        {
            this.ModelState.AddModelError("", "Please enter two valid Xs.");
        }

        // If we got this far, something failed; redisplay the form.
        return this.View(model);
    }
    ```

    Sorry for the terseness, up all night working on this; hopefully my question is clear enough? Also sorry, since this is kind of a newbie question that I might be able to answer with a few hours of documentation-trawling, but I'm time-pressured... bleh.
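    A minimal sketch of one way to do the stripping (assuming the ASP.NET MVC 2 UpdateModel overload that accepts a value provider): bind from a FormCollection you have already cleaned, instead of the controller's implicit value provider.

    ```csharp
    using System.Text.RegularExpressions;

    [HttpPost]
    public ActionResult Index(FormCollection form)
    {
        // scrub every posted value down to its digits before binding
        var cleaned = new FormCollection();
        foreach (string key in form)
            cleaned.Add(key, Regex.Replace(form[key], @"[^0-9]", ""));

        TeamContainer context = new TeamContainer();
        Team thisTeam = context.Teams.Single(t => t.TeamId == this.CurrentTeamId);

        UpdateModel(thisTeam, new[] { "X1", "X2" }, cleaned.ToValueProvider());
        context.SaveChanges();
        return RedirectToAction("Index");
    }
    ```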

  • Why do I get "Error 6060" when I try to use DBD::Advantage with a 64-bit perl on Linux?

    - by WarheadsSE
    I realize that I am attempting to go beyond the "supported" behavior of the manufacturer's released drivers for Perl; after all, they have only released the package with x86 .so's. However, since I cannot use their package with x64 Perl on a RHEL 5.4 x86_64 box, and maintaining a separate install of x86 Perl just for this one package is unappealing, I have made an attempt to get this puppy working, thanks to the released 64-bit .so's that accompany other driver packages for Advantage.

    What I have done to this point:

    - download the beta 10 DBI drivers, in 32-bit
    - download the beta 10 PHP extension (it contains 32-bit and x86_64)
    - copy the required DLLs into the ads-lib location (e.g. /usr/local/ads/lib64)
    - compile the Perl DBI driver with the path to the lib64 .so's

    Good compilation, good install, good use. The problem is that I always get:

    ```
    failed: [iAnywhere Solutions][Advantage SQL][ASA] Error 6060: Advantage Database Server not available on specified server. axServerConnect (SQL-HY000)(DBD: db_login/SQLConnect err=-1)
    ```

    Does anyone have any ideas?

    EDIT: fixed package name in post title

    EDIT: Updated title. It appears that it's not just the x64 perl, but the RHEL 5.4 underneath that may be interfering. As commented below, I managed to shoe-horn an x86 perl onto the system, compiled DBD::Advantage 9.99, and later replaced that with 9.10, and none of these x86 builds would connect either. Neither library (9.99 or 9.10), in either bit-edness, will connect from this x86_64 server to the Windows server's UNC path. I have successfully mounted this share without problems, but still I cannot seem to connect to the 9.1 server. I have tried:

    - \\hostname\PATH
    - \\FQDN\PATH
    - \\IP\PATH

    and all of these variations with the (default) port 6262 included. My Windows machine connects fine, with both 9.1 and 9.99, from Strawberry Perl.

  • How do I best remove the unicode characters that XHTML regards as non-valid using PHP?

    - by Andrew Stacey
    I run a forum designed to support an international mathematics group. I've recently switched it to unicode for better support of international characters. In debugging this conversion, I've discovered that not all unicode characters are considered valid XHTML (the relevant document appears to be http://www.w3.org/TR/unicode-xml/). One of the steps that the forum software goes through before presenting the posts to the browser is an XHTML validation/sanitisation step. It seems a reasonable idea that at that stage it should remove any unicode characters that XHTML doesn't like.

    So my question is: is there a standard (or best) way of doing this in PHP? (The forum is written in PHP, by the way.)

    I guess the failsafe would be a simple str_replace (if that's also the best, do I need to do anything extra to make sure it works properly with unicode?), but that would involve me having to go through the XHTML DTD (or the above-referenced W3 page) carefully to figure out what characters to list in the search part of str_replace, so if this is the best way, has someone already done that so that I can steal, err, copy it?

    (Incidentally, the character that caused the problem was U+000C, the 'formfeed', which (according to the W3 page) is valid HTML but invalid XHTML!)
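    A hedged sketch of the usual preg_replace approach (rather than str_replace): delete everything outside the XML 1.0 character ranges, which covers the formfeed case. The /u modifier makes PCRE treat the subject as UTF-8; note that characters W3C merely "discourages" still pass through.

    ```php
    <?php
    // Strip characters that are invalid in XML 1.0 (and therefore in XHTML).
    // Valid: U+0009, U+000A, U+000D, U+0020-U+D7FF, U+E000-U+FFFD, U+10000-U+10FFFF.
    function strip_invalid_xml_chars($text) {
        return preg_replace(
            '/[^\x{0009}\x{000A}\x{000D}\x{0020}-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}]/u',
            '',
            $text
        );
    }

    echo strip_invalid_xml_chars("bad\x0Cform feed");  // prints "badform feed"
    ```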

  • plupload with webpy.

    - by markus
    Hi, I have a problem. I want to upload a file with plupload, using the HTML5 runtime. This is my html/js code:

    ```js
    jQuery(function(){
        jQuery("#uploader").pluploadQueue({
            // General settings
            runtimes : 'html5',
            name : 'file',
            url : 'http://server.name/addContent',
            max_file_size : '${maxSize}$_("GB")',
        });
        jQuery('#form_upload_file').submit(function(e) {
            var uploader = jQuery('#uploader').pluploadQueue();
            // Validate number of uploaded files
            if (uploader.total.uploaded == 0) {
                // Files in queue, upload them first
                if (uploader.files.length > 0) {
                    // When all files are uploaded, submit form
                    uploader.bind('UploadProgress', function() {
                        if (uploader.total.uploaded == uploader.files.length)
                            jQuery('#form_upload_file').submit();
                    });
                    uploader.start();
                } else
                    alert('You must at least upload one file.');
                e.preventDefault();
            }
        });
    });
    ```

    ```html
    <form id="form_upload_file" action="#" method="POST">
        <div id="uploader"></div>
        <input type="hidden" name="token" value="token" />
        <input type="hidden" name="idUser" value="$idUser" />
    </form>
    ```

    So, when I click the button to upload (the submit() method is not called), it makes an OPTIONS HTTP request to my server, so I don't know what I must do to save the file. This is my webpy code:

    ```python
    def OPTIONS(self):
        web.header('Content-type', 'text/plain: charset=utf-8')
        web.header('Cache-Control', 'no-store, no-cache, must-revalidate')
        web.header('Cache-Control', 'post-check=0, pre-check=0', False)
        web.header('Pragma', 'no-cache')

    def POST(self):
        input = web.input(_unicode=False, file={})  # grab the inputs
        self.copy(input.file.file)
        # etc.
    ```

    Any idea? Thanks.
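    A minimal sketch of the likely issue (an assumption based on the cross-host upload URL above): the OPTIONS request looks like a CORS preflight, because the upload URL is on a different origin than the page. Answer the preflight with Access-Control headers and the browser should follow up with the real POST; the destination path here is illustrative:

    ```python
    import web

    class AddContent:
        def OPTIONS(self):
            # CORS preflight: tell the browser the cross-origin POST is allowed
            web.header('Access-Control-Allow-Origin', '*')
            web.header('Access-Control-Allow-Methods', 'POST, OPTIONS')
            web.header('Access-Control-Allow-Headers', 'Content-Type')
            return ''

        def POST(self):
            web.header('Access-Control-Allow-Origin', '*')
            i = web.input(file={})
            # '/tmp/uploads/' is an example destination, not part of the question
            with open('/tmp/uploads/' + i.file.filename, 'wb') as out:
                out.write(i.file.file.read())
            return 'ok'
    ```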

  • Java 2D Resize

    - by jon077
    I have some old Java 2D code I want to reuse, but was wondering: is this the best way to get the highest quality images?

    ```java
    public static BufferedImage getScaled(BufferedImage imgSrc, Dimension dim) {
        // This code ensures that all the pixels in the image are loaded.
        Image scaled = imgSrc.getScaledInstance(
            dim.width, dim.height, Image.SCALE_SMOOTH);

        // This code ensures that all the pixels in the image are loaded.
        Image temp = new ImageIcon(scaled).getImage();

        // Create the buffered image.
        BufferedImage bufferedImage = new BufferedImage(temp.getWidth(null),
            temp.getHeight(null), BufferedImage.TYPE_INT_RGB);

        // Copy image to buffered image.
        Graphics g = bufferedImage.createGraphics();

        // Clear background and paint the image.
        g.setColor(Color.white);
        g.fillRect(0, 0, temp.getWidth(null), temp.getHeight(null));
        g.drawImage(temp, 0, 0, null);
        g.dispose();

        // j2d's image scaling quality is rather poor, especially when
        // scaling down an image to a much smaller size. We'll post filter
        // our images using a trick found at
        // http://blogs.cocoondev.org/mpo/archives/003584.html
        // to increase the perceived quality....
        float origArea = imgSrc.getWidth() * imgSrc.getHeight();
        float newArea = dim.width * dim.height;
        if (newArea <= (origArea / 2.)) {
            bufferedImage = blurImg(bufferedImage);
        }
        return bufferedImage;
    }

    public static BufferedImage blurImg(BufferedImage src) {
        // soften factor - increase to increase blur strength
        float softenFactor = 0.010f;

        // convolution kernel (blur)
        float[] softenArray = {
            0,            softenFactor,       0,
            softenFactor, 1-(softenFactor*4), softenFactor,
            0,            softenFactor,       0};

        Kernel kernel = new Kernel(3, 3, softenArray);
        ConvolveOp cOp = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);
        return cOp.filter(src, null);
    }
    ```
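    For comparison, a sketch of the progressive downscaling technique often recommended over getScaledInstance plus post-blur (this is an alternative approach, not part of the original code): halve the image repeatedly with bilinear interpolation until the target size is reached.

    ```java
    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;

    public final class ProgressiveScaler {
        // Scale down in steps of at most 2x; large single-step reductions are
        // where Java 2D's bilinear filtering loses the most detail.
        public static BufferedImage scaleDown(BufferedImage src, int w, int h) {
            BufferedImage img = src;
            int cw = src.getWidth();
            int ch = src.getHeight();
            do {
                cw = Math.max(cw / 2, w);
                ch = Math.max(ch / 2, h);
                BufferedImage tmp = new BufferedImage(cw, ch, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = tmp.createGraphics();
                g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                                   RenderingHints.VALUE_INTERPOLATION_BILINEAR);
                g.drawImage(img, 0, 0, cw, ch, null);
                g.dispose();
                img = tmp;
            } while (cw > w || ch > h);
            return img;
        }
    }
    ```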

  • Update a PDF to include an encrypted, hidden, unique identifier?

    - by Dave Jarvis
    Background

    The idea is this:

    1. Person provides contact information for an online book purchase
    2. Book, as a PDF, is marked with a unique hash
    3. Person downloads the book

    PDF passwords are annoying and extremely easy to circumvent. The ideal process would be something like:

    1. Generate hash based on contact information
    2. Store contact information and hash in database
    3. Acquire book lock
    4. Update an "include" file with the hash text
    5. Generate book as PDF (using pdflatex)
    6. Apply hash to book
    7. Release book lock
    8. Send email with book download link

    Technologies

    The following technologies can be used (other programming languages are possible, but libraries will likely be limited to those supplied by the host): C, Java, PHP; LaTeX files; PDF files; Linux.

    Question

    What programming techniques (or open source software) should I investigate to:

    - Embed a unique hash (or other mark) in a PDF
    - Create a collusion-attack-resistant mark
    - Develop a non-fragile solution (e.g., PDF -> EPS -> PDF still contains the mark)

    Research

    I have looked at the following possibilities: steganography; natural language processing (NLP); converting blank pages in the PDF to images, marking those images, and reassembling the PDF; the LaTeX watermark package; ImageMagick.

    Steganography requires keeping a master copy of the images, and I'm not sure if the watermark would survive PDF -> EPS -> PDF, or other types of conversion. LaTeX creates an image cache, so any steganographic process would have to intercept that process somehow. NLP introduces grammatical errors. Inserting blank pages as images is immediately suspect; it is easy to replace suspicious blank pages. The LaTeX watermark package draws visible marks. ImageMagick draws visible marks.

    What other solutions are possible?

    Related Links

    - http://www.tcpdf.org/
    - invisible watermarks in images

    Thank you!
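    A hedged sketch of steps 1, 4 and 5 of the ideal process (file and variable names are illustrative, not from the question): derive the hash from the contact details, write it into the include file, and let the book's preamble pick it up.

    ```sh
    # 1. hash the buyer's contact information
    HASH=$(printf '%s' "$BUYER_NAME:$BUYER_EMAIL" | sha256sum | cut -d' ' -f1)

    # 4. update the include file the book sources
    printf '\\newcommand{\\buyerhash}{%s}\n' "$HASH" > buyerhash.tex

    # 5. build the PDF; book.tex is assumed to contain \input{buyerhash.tex}
    pdflatex book.tex
    ```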

  • Bash: how to simply parallelize tasks?

    - by NoozNooz42
    I'm writing a tiny script that calls the "PNGOUT" util on a few hundred PNG files. I simply did this:

    ```sh
    find $BASEDIR -iname "*png" -exec pngout {} \;
    ```

    And then I looked at my CPU monitor and noticed only one of the cores was used, which is quite sad. In this day and age of dual, quad, octo and hexa (?) core desktops, how do I simply parallelize this task with Bash? (It's not the first time I've had such a need; quite a lot of these utils are mono-threaded. I already had this case with mp3 encoders.)

    Would simply running all the pngout processes in the background do? How would my find command look then? (I'm not too sure how to mix find and the '&' character.) If I have three hundred pictures, this would mean swapping between three hundred processes, which doesn't seem great anyway!?

    Or should I copy my three hundred files or so into "nb dirs", where "nb dirs" would be the number of cores, then run "nb finds" concurrently? (Which would be close enough.) But how would I do this?
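    A minimal sketch of the usual xargs answer (the -P value is an assumption; set it to your core count): xargs keeps at most that many pngout processes alive at once, so three hundred files never means three hundred simultaneous processes.

    ```sh
    # -print0 / -0 keep filenames with spaces intact;
    # -n 1 passes one file per pngout invocation; -P 4 runs four at a time
    find "$BASEDIR" -iname '*.png' -print0 | xargs -0 -n 1 -P 4 pngout
    ```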

  • How can you handle cross-cutting concerns in JAX-WS without Spring or AOP? Handlers?

    - by LES2
    I do have something more specific in mind, however: each web service method needs to be wrapped with some boilerplate code (a cross-cutting concern; yes, Spring AOP would work great here, but it either doesn't work or is unapproved by the gov't architecture group). A simple service call is as follows:

    ```java
    @WebMethod...
    public Foo performFoo(...) {
        Foo result = null;
        Object something = blah;
        try {
            soil(something);
            result = handlePerformFoo(...);
        } catch (Exception e) {
            throw translateException(e);
        } finally {
            wash(something);
        }
        return result;
    }

    protected abstract Foo handlePerformFoo(...);
    ```

    (I hope that's enough context.) Basically, I would like a hook (in the same thread, like a method invocation interceptor) with a before() and an after() that could soil(something) and wash(something) around the method call for every freaking WebMethod. Can't use Spring AOP because my web services are not Spring-managed beans :(

    HELP!!!!! Give advice! Please don't let me copy-paste that boilerplate 1 billion times (as I've been instructed to do).

    Regards, LES
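    A minimal sketch of the handler route the title hints at (class and method names are illustrative): a JAX-WS SOAPHandler attached via @HandlerChain runs in the same thread as the endpoint and sees every request and response, which gives you the before/after hook without Spring.

    ```java
    import java.util.Set;
    import javax.xml.namespace.QName;
    import javax.xml.ws.handler.MessageContext;
    import javax.xml.ws.handler.soap.SOAPHandler;
    import javax.xml.ws.handler.soap.SOAPMessageContext;

    public class SoilWashHandler implements SOAPHandler<SOAPMessageContext> {

        public boolean handleMessage(SOAPMessageContext ctx) {
            boolean outbound =
                (Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
            if (!outbound) {
                soil();   // inbound request: the "before" hook
            } else {
                wash();   // outbound response: the "after" hook
            }
            return true;  // continue down the handler chain
        }

        public boolean handleFault(SOAPMessageContext ctx) {
            wash();       // still clean up when the method faults
            return true;
        }

        public void close(MessageContext ctx) { }

        public Set<QName> getHeaders() { return null; }

        private void soil() { /* ... */ }
        private void wash() { /* ... */ }
    }
    ```

    Attach it to the endpoint class with @HandlerChain(file = "handlers.xml"), where handlers.xml lists SoilWashHandler.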

  • How to handle "Remember me" in the Asp.Net Membership Provider

    - by RemotecUk
    I've written a custom membership provider for my ASP.NET website. I'm using the default FormsAuthentication redirect, where you simply pass true to the method to tell it to "remember me" for the current user. I presume that this function simply writes a cookie to the local machine containing some login credential of the user.

    What does ASP.NET put in this cookie? If the format of my usernames was known (e.g. sequential numbering), could someone easily copy this cookie and, by putting it on their own machine, be able to access the site as another user?

    Additionally, I need to be able to intercept the authentication of the user who has the cookie. Since the last time they logged in, their account may have been cancelled, they may need to change their password, etc., so I need the option to intercept the authentication and, if everything is still OK, allow them to continue, or else redirect them to the proper login page.

    I would be grateful for guidance on both of these two points. I gather that for the second I can possibly put something in global.asax to intercept the authentication? Thanks in advance.
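    On the second point, a hedged sketch of the global.asax route (the two membership checks are placeholders for your own provider's logic): decrypt and re-validate the forms ticket on every request, so a cancelled account can be signed out even though its cookie is still technically valid.

    ```csharp
    // In Global.asax.cs
    protected void Application_AuthenticateRequest(object sender, EventArgs e)
    {
        HttpCookie cookie = Request.Cookies[FormsAuthentication.FormsCookieName];
        if (cookie == null) return;   // anonymous user, nothing to check

        FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
        if (ticket == null || ticket.Expired) return;

        // Hypothetical checks against the membership store:
        if (AccountIsCancelled(ticket.Name) || PasswordChangeRequired(ticket.Name))
        {
            FormsAuthentication.SignOut();
            Response.Redirect(FormsAuthentication.LoginUrl);
        }
    }
    ```

    (As for the first point: the cookie holds an encrypted, tamper-checked FormsAuthenticationTicket rather than a bare username, but a copied cookie does work from another machine until it expires, which is why the re-validation above is worth having.)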

  • How to html encode the output of an MVC view?

    - by jessegavin
    I am building a web app which will generate lots of different code templates (HTML tables, XML documents, SQL scripts) that I want to render on screen as encoded HTML, so that people can copy and paste those templates. I would like to be able to use ASP.NET MVC views to generate the code for these templates (rather than, say, using a StringBuilder). Is there a way, using ASP.NET MVC, to store the results of a rendered view in a string? Something like the following, perhaps?

    ```csharp
    public ContentResult HtmlTable(string format)
    {
        var m = new MyViewModel();
        m.MyDataElements = _myDataRepo.GetData();

        // Somehow render the view and store it as a string?
        // Not sure how to achieve this part.
        var viewHtml = View(m);

        var htmlEncodedView = Server.HtmlEncode(viewHtml);
        return Content(htmlEncodedView);
    }
    ```

    NOTE: My original question mentioned NHaml views specifically, but I realized that this wasn't view-engine specific. So if you see answers related to NHaml, that's why.
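    A sketch of the usual helper for this (written against the ASP.NET MVC 2 ViewContext signature; the base-class arrangement is an assumption): run the view engine by hand against a StringWriter instead of the response stream.

    ```csharp
    using System.IO;
    using System.Web.Mvc;

    public abstract class BaseController : Controller
    {
        protected string RenderViewToString(string viewName, object model)
        {
            ViewData.Model = model;
            using (var sw = new StringWriter())
            {
                var found = ViewEngines.Engines.FindPartialView(ControllerContext, viewName);
                var viewContext = new ViewContext(
                    ControllerContext, found.View, ViewData, TempData, sw);
                found.View.Render(viewContext, sw);
                found.ViewEngine.ReleaseView(ControllerContext, found.View);
                return sw.ToString();
            }
        }
    }

    // Usage inside the action:
    //   return Content(Server.HtmlEncode(RenderViewToString("HtmlTable", m)));
    ```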

  • Does somebody know something about Flash CS5's "multiple copies" bug?

    - by Slav
    I'm, generally, complaining, not asking, but if somebody is able (Adobe isn't) to help, that would be perfect. I use the Flash CS5 IDE to compose .swc files and then import them into FlashDevelop. Everything works fine until SOMETIMES. About every 10th "Ctrl+S" (save) click, Flash CS5 duplicates ALL images (loaded .bmp, .jpg, .png... files) and names them like "Copy ..." where "..." is an incrementing number. Furthermore, Flash CS5 creates duplicates of some layers; other layers get duplicates of the items placed on them. Restoring the project to its previous (non-duplicated) state is hell.

    This bug is not reproducible on demand, but it happens every 2 hours or so, both right after opening the project and after several hours of work. It happens on different machines; I know at least 5 people who told me that they have also run into this issue and do not know how to escape it.

    Has somebody experienced it? Does somebody know how to handle it?

  • Rendering LaTeX on third-party websites?

    - by A. Rex
    There are some sites on the web that render LaTeX into a more readable form, such as Wikipedia, some Wordpress blogs, and MathOverflow. They may use images, MathML, jsMath, or something like that. There are other sites on the web where LaTeX appears inline and is not rendered, such as the arXiv, various math forums, or my email. In fact, it is quite common to see an arXiv paper's abstract with raw LaTeX in it, e.g. this paper.

    Is there a plugin available for Firefox, or would it be possible to write one, that renders LaTeX within pages that do not provide a rendering mechanism themselves? (The LaTeX would be enclosed within dollar signs, e.g. $\pi$. See the arXiv link above.)

    Some notes:

    - It may be impossible to render some of the code, because authors often copy-paste code directly from their source TeX files, which may contain things like "\cite{foo}" or undefined commands. These should be left alone.
    - This question is a repost of a question from MathOverflow that was closed for not being related to math. There is one answer there, which is helpful, but perhaps Stack Overflow can provide better answers.
    - I program a lot, but Javascript is not my specialty, so comments along the lines of "look at this library" are not particularly helpful to me (but may be to others).

  • Adding changes from one Mercurial repository to another

    - by Patrik Hägne
    When changing the VCS for my project FakeItEasy from SVN to Mercurial on Google Code, I was a bit too eager (I'm funny like that). What I did was just check the latest version out of SVN and then commit that checkout as the first revision of the new Mercurial repo. This obviously has the effect that all history is lost.

    Later, when getting a bit better accustomed to Mercurial, I realized that there is such a thing as a "convert extension" that allows you to convert an SVN repo into a Mercurial repo. Now what I want to do is convert the old SVN repo and then have all changesets from the currently existing Mercurial repo imported into this converted repo, except the very first commit to Mercurial.

    I've converted the SVN repo to a local Mercurial repo, but now I'm stuck. I thought I'd be able to use the convert extension to bring the current Mercurial repository into the converted one, having a splice map remove the first commit, but I cannot seem to get this to work. I've also tried to just use convert without a splice map to get all changesets from the current Mercurial repo into the converted one, and then rebase the second revision in the current repo onto the last commit from the old SVN repository, but I can't get that to work either.

    To make this clearer, let's say I have these two repositories:

    A: revA1-revA2
    B: revB1-revB2-revB3 (where revB1 is actually a copy of revA2)

    Now I want to combine these two into a new repository containing this:

    C: revA1-revA2-revB2-revB3
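    A hedged sketch of the splicemap usage (the hashes are placeholders; splicemap entries are full 40-character node ids, one "child parent" pair per line): splice revB2 onto revA2 so revB1 drops out of the graph.

    ```sh
    # A-converted is the repo produced by converting the SVN history.
    # Find the two node ids to join:
    hg -R B log -r 1 --template '{node}\n'              # revB2 in the source
    hg -R A-converted log -r tip --template '{node}\n'  # revA2 in the destination

    # splicemap line: <node of revB2 in B> <node of revA2 in A-converted>
    echo "<revB2-node> <revA2-node>" > splicemap

    hg convert --splicemap splicemap B A-converted
    ```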

  • How can a program be detected as running?

    - by ryeguy
    I have written a program that is sort of an unofficial, standalone plugin for an application. It allows customers to get a service that is a lower-priced alternative to the vendor's own. My program is not illegal, not against any kind of TOS, and is certainly not a virus, adware, or anything like that. That being said, the vendor, of course, is not happy about the competition and is trying to block my application from running.

    He has already tried some tactics to stop people from running my app alongside his. He makes it so that if my app is detected, his app throws a fake error. First, he checked whether my program was running by looking for an open window with the right title. I countered this by randomizing the program title at startup. Next, he looked for the running process name. I countered this by making the app copy itself when started as [random string].exe and then running that.

    Anyway, my question is this: what else can he do to detect if my program is running? I know that you can read window text (i.e. status bars, labels). I'm prepared to counter this by replacing the labels with images (ugh, any other way?). But what else is there? Can you detect which .dlls a program has loaded? If so, could this be solved by randomizing the dll names before loading them? I know that it's possible to get a program's signature in memory and track it that way (like a virus scanner), but the chances of him doing that probably aren't good, because that sounds pretty advanced.

    Even though this is kinda crappy of him to be doing, it's kind of fun. It's like a nerdy fist fight.

    EDIT: When I said it's a plugin, that is just the (incorrect) term I used. It's a standalone EXE. The "API" between my program and the other is simply entering data into the controls (like textboxes, etc).

  • How to use boost::bind with non-copyable params, for example boost::promise?

    - by zhengxi
    Some C++ objects have no copy constructor, but have a move constructor. For example, boost::promise. How can I bind those objects using their move constructors?

    ```cpp
    #include <boost/thread.hpp>

    void fullfil_1(boost::promise<int>& prom, int x) {
        prom.set_value(x);
    }

    boost::function<void()> get_functor() {
        // boost::promise is not copyable, but movable
        boost::promise<int> pi;

        // compilation error
        boost::function<void()> f_set_one = boost::bind(&fullfil_1, pi, 1);

        // compilation error as well
        boost::function<void()> f_set_one = boost::bind(&fullfil_1, std::move(pi), 1);

        // PS. I know it is possible to bind a pointer to the object instead of
        // the object itself. But that is a weird solution; in this case I will
        // have to take care of the lifetime of the object instead of delegating
        // that to boost::bind (by moving the object into the boost::function
        // object)
        //
        // weird: pi will be destroyed on leaving the scope
        boost::function<void()> f_set_one = boost::bind(&fullfil_1, boost::ref(pi), 1);

        return f_set_one;
    }
    ```
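    A hedged workaround sketch (not a true move into the bind, which needs C++11): hold the promise through a boost::shared_ptr. The shared_ptr is copyable, so bind accepts it, and it manages the promise's lifetime for you, avoiding the dangling boost::ref problem above.

    ```cpp
    #include <boost/bind.hpp>
    #include <boost/function.hpp>
    #include <boost/make_shared.hpp>
    #include <boost/shared_ptr.hpp>
    #include <boost/thread.hpp>

    void fullfil_1(boost::shared_ptr<boost::promise<int> > prom, int x) {
        prom->set_value(x);
    }

    boost::function<void()> get_functor() {
        boost::shared_ptr<boost::promise<int> > pi =
            boost::make_shared<boost::promise<int> >();

        // the copy of the shared_ptr stored inside the bound functor
        // keeps the promise alive for as long as the functor exists
        return boost::bind(&fullfil_1, pi, 1);
    }
    ```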
