Search Results

Search found 15301 results on 613 pages for 'global assembly cache'.

Page 535/613 | < Previous Page | 531 532 533 534 535 536 537 538 539 540 541 542  | Next Page >

  • forcing a download using PHP / jQuery

    - by Dirty-flow
    I know there are already many questions about forcing a download with PHP, but I can't find what I'm doing wrong or what I should do. I have a list of filenames, and I want to download one of them by clicking a button. My jQuery: $(".MappeDownload").on("click",function(e){ e.stopPropagation(); fileId=$(this).val() $.post("ajax/DownloadFile.php",{ id : fileId}) }) and on the server side I have a table with the file names and the file path. $sql = "SELECT vUploadPfad, vUploadOriginname FROM tabUpload WHERE zUploadId='$_POST[id]'"; $result = mysql_query($sql) or die(""); $file = mysql_fetch_array($result); $localfile = $file["vUploadPfad"]; $name=$file["vUploadOriginname"]; $fp = fopen($localfile, 'rb'); header("Cache-Control: "); header("Pragma: "); header("Content-Type: application/octet-stream"); header("Content-Length: " . filesize($localfile)); header("Content-Disposition: attachment; filename='".$name."';"); header("Content-Transfer-Encoding: binary\n"); fpassthru($fp); exit; The AJAX request is successful and I'm getting the right headers (file size, file name, etc.), but the download is not starting.
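
    A minimal sketch of what a standalone DownloadFile.php could look like is below, assuming the table and column names from the question; the integer cast and the double-quoted filename are my additions, not part of the original code. Note that the body of a $.post() response is never handed to the browser's download manager, so the button would need to navigate to this script (window.location, a plain link, or a hidden form/iframe) rather than request it with AJAX.

        <?php
        // Sketch only: same mysql_* style as the question, with the id cast to an
        // integer and the filename wrapped in double quotes (single quotes end up
        // being treated as part of the filename by most browsers).
        $id = (int) $_GET['id'];

        $sql    = "SELECT vUploadPfad, vUploadOriginname FROM tabUpload WHERE zUploadId='$id'";
        $result = mysql_query($sql) or die("");
        $file   = mysql_fetch_array($result);

        $localfile = $file["vUploadPfad"];
        $name      = $file["vUploadOriginname"];

        header("Content-Type: application/octet-stream");
        header("Content-Length: " . filesize($localfile));
        header('Content-Disposition: attachment; filename="' . $name . '"');

        readfile($localfile);
        exit;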

    Read the article

  • Create new table with WordPress API

    - by Fire G
    I'm trying to create a new plugin to track popular posts based on views and I have everything done and ready to go, but I can't seem to create a new table using the WordPress API (I can do it with standard PHP or with phpMyAdmin, but I want this plugin to be self-sufficient). I've tried several ways ($wpdb->query, $wpdb->get_results, dbDelta) but none of them will create the new table. function create_table(){ global $wpdb; $tablename = $wpdb->prefix.'popular_by_views'; $ppbv_table = $wpdb->get_results("SHOW TABLES LIKE '".$tablename."'" , ARRAY_N); if(is_null($ppbv_table)){ $create_table_sql = "CREATE TABLE '".$tablename."' ( 'id' BIGINT(50) NOT NULL AUTO_INCREMENT, 'url' VARCHAR(255) NOT NULL, 'views' BIGINT(50) NOT NULL, PRIMARY KEY ('id'), UNIQUE ('id') );"; $wpdb->show_errors(); $wpdb->flush(); if(is_null($wpdb->get_results("SHOW TABLES LIKE '".$tablename."'" , ARRAY_N))) echo 'crap, the SQL failed.'; } else echo 'table already exists, nothing left to do.';}
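
    For reference, a minimal sketch of the same table creation done through dbDelta(), the route the WordPress API provides for plugin schema changes; the table and column names are taken from the question, while the activation hook is an assumption about how the plugin is wired up. dbDelta() is picky about formatting: one column per line, two spaces after "PRIMARY KEY", and no quotes around identifiers (the single-quoted identifiers in the CREATE TABLE above are a likely culprit, since MySQL expects backticks or no quotes around table and column names).

        <?php
        // Sketch, assuming this lives in the plugin's main file: create the table
        // on activation with dbDelta().
        function ppbv_create_table() {
            global $wpdb;
            require_once ABSPATH . 'wp-admin/includes/upgrade.php';

            $tablename = $wpdb->prefix . 'popular_by_views';

            $sql = "CREATE TABLE $tablename (
                id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
                url VARCHAR(255) NOT NULL,
                views BIGINT(20) NOT NULL DEFAULT 0,
                PRIMARY KEY  (id)
            );";

            dbDelta($sql);
        }
        register_activation_hook(__FILE__, 'ppbv_create_table');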

    Read the article

  • What's the difference between the [OptionalField] and [NonSerialized] attributes?

    - by IbrarMumtaz
    I came across this question on Transcender: "What should you apply to a field if its value is not required during deserialization?" Me = [NonSerialized], answer = [OptionalField]. My gut reaction was NonSerialized; I have no idea why, but in the space of 5 seconds that's what I thought, and to my surprise Transcender says I am wrong. OK, fair enough... but why? Looking more closely at the question I have a good idea what to look out for as far as the [NonSerialized] attribute is concerned, but I would still really like this clearing up. As far as I can tell, the former has a relationship with versioning conflicts between newer and older versions of the same assembly, while the latter is concerned with not serializing a field at all. Is there anything else that might pick these two apart? MSDN does not really say much about this, as they both apply to the BinaryFormatter and SoapFormatter, with the XML serializer using XmlIgnoreAttribute instead. My second question is: can you mix and match the two attributes? I am yet to use them, as I have not had an excuse to mess about with them, so my curiosity can only go so far. Just throwing this one out there, but does my answer have something to do with the way [OnDeserialized] and the IDeserializationCallback interface are implemented? I'm guessing here... Thanks in advance. UPDATE: I know that the OptionalField attribute does not serialize the value held by a data member, but NonSerialized will not even serialize the data member or its value. Does that sound about right? That's all I got on these two attributes.

    Read the article

  • PHP OO vs procedural with AJAX

    - by vener
    I currently have an AJAX-heavy (almost everything) intranet web app for a business. It is highly modularized (components and modules a la Joomla), with plenty of folders and files: ~80-100 different viewing pages (each very unique in its own sense) at last count, and the number will likely increase in the near future. I based the design around commands and screens: the client requests a command and sends the required data, then receives the data that is displayed via JavaScript on the screen. That said, there are generally two types of files: a display file with HTML, JavaScript, and a little PHP for templating, and a PHP backend file with a single switch statement with actions such as save, update and delete, and maybe other functions. There is very little code reuse. Recently, I have been adding a server-side undo function that requires me to reuse some code. So I took the chance to try out OOP, but I noticed that some functions are so simple that creating a class, retrieving all the data, then updating all the related rows in the database seems like overkill for a simple action, as speed is quite critical. Also, I noticed there is only one class in an entire file. So, what if the entire PHP file is a class? So, between creating a class and methods, and using global variables and functions, which is faster?
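
    If the question is literally which call style is faster, the quickest way to settle it is a micro-benchmark along these lines (a rough sketch with made-up function and class names); in practice the per-call difference is tiny compared with a single database round trip, so the choice can usually be made on maintainability grounds instead.

        <?php
        // Rough timing sketch: compare a plain function call with a method call.
        function procedural_save($row) { return $row; }

        class Saver {
            public function save($row) { return $row; }
        }

        $iterations = 1000000;

        $start = microtime(true);
        for ($i = 0; $i < $iterations; $i++) {
            procedural_save($i);
        }
        $functionTime = microtime(true) - $start;

        $saver = new Saver();
        $start = microtime(true);
        for ($i = 0; $i < $iterations; $i++) {
            $saver->save($i);
        }
        $methodTime = microtime(true) - $start;

        printf("function: %.4fs  method: %.4fs\n", $functionTime, $methodTime);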

    Read the article

  • Why is only the first shown window focusable

    - by Miha Markic
    Imagine the code below. Only the first window appears on top; for some reason none of the subsequent windows do, nor can they be programmatically focused (they appear in the background). Any idea how to work around this? BTW, static methods/properties are not allowed, nor is any global property. [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Thread t1 = new Thread(CreateForm); t1.SetApartmentState(ApartmentState.STA); t1.Start(); t1.Join(); t1 = new Thread(CreateForm); t1.SetApartmentState(ApartmentState.STA); t1.Start(); t1.Join(); } private static void CreateForm() { using (Form f = new Form()) { System.Windows.Forms.Timer t = new System.Windows.Forms.Timer { Enabled = true, Interval = 2000 }; t.Tick += (s, e) => { f.Close(); t.Enabled = false; }; f.TopMost = true; Application.Run(f); } }

    Read the article

  • Which options do I have for Java process communication?

    - by Dmitriy Matveev
    We have a place in our code of the following form: void processParam(Object param) { wrapperForComplexNativeObject result = jniCallWhichMayCrash(param); processResult(result); } processParam is a method which is called with many different arguments. jniCallWhichMayCrash is a native method which is intended to do some complex processing of its parameter and to create some complex object; it can crash in some cases. wrapperForComplexNativeObject is a wrapper type generated by SWIG. processResult is a method written in pure Java which processes its parameter by creating several kinds of objects (by kinds I don't mean classes, more like hierarchies): 1) some non-unique objects which reference each other (from the same hierarchy); these objects can have duplicates created from invocations of the processParam() method with different parameter values, and since it's costly to keep all the duplicates it's necessary to cache them; 2) some unique objects which reference each other (from the same hierarchy) and some of the objects of the 1st kind. After processParam is executed for each of the arguments from some set, the data created in processResult will be processed together. The problem is that the jniCallWhichMayCrash method may crash the entire JVM, and this would be very bad. The crash may happen for one argument value and not for another. We've decided that it's better to ignore crashes inside the JVM and just skip some chunks of data when they occur. In order to do this we should run the processParam function inside a separate process and pass the result somehow (how? this is the question) back to the main process; in case of a crash we will only lose some part of the data (that's OK) without losing everything else. So for now the main problem is the implementation of transport between the different processes. Which options do I have? I can think of serialization and transmitting the binary data over streams, but serialization may not be very fast due to object complexity. Maybe I have some other options for implementing this?

    Read the article

  • Login fails when recreating database with Code First

    - by Mun
    I'm using ASP.NET Entity Framework's Code First to create my database from the model, and the login seems to fail when the database needs to be recreated after the model changes. In Global.asax, I've got the following: protected void Application_Start() { Database.SetInitializer(new DropCreateDatabaseIfModelChanges<EntriesContext>()); // ... } In my controller, I've got the following: public ActionResult Index() { // This is just to force the database to be created var context = new EntriesContext(); var all = (from e in context.Entries select e).ToList(); } When the database doesn't exist, it is created with no problems. However, when I make a change to the model, rebuild and refresh, I get the following error: Login failed for user 'sa'. My connection string looks like this: <add name="EntriesContext" connectionString="Server=(LOCAL);Database=MyDB;User Id=sa;Password=password" providerName="System.Data.SqlClient" /> The login definitely works as I can connect to the server and the database from Management Studio using these credentials. If I delete the database manually, everything works correctly and the database is recreated as expected with the schema reflecting the changes made to the model. It seems like either the password or access to the database is being lost. Is there something else I need to do to get this working?

    Read the article

  • how to make a CUDA Histogram kernel?

    - by kitw
    I am writing a CUDA kernel for a histogram of a picture, but I have no idea how to return an array from the kernel, and the array will change while other threads read it. Any possible solution for it? __global__ void Hist( TColor *dst, //input image int imageW, int imageH, int*data ){ const int ix = blockDim.x * blockIdx.x + threadIdx.x; const int iy = blockDim.y * blockIdx.y + threadIdx.y; if(ix < imageW && iy < imageH) { int pixel = get_red(dst[imageW * (iy) + (ix)]); //this assign specific RED value of image to pixel data[pixel] ++; // ?? problem statement ... } } @param d_dst: input image (TColor is equal to float4). @param data: the array for the histogram, size [255]. extern "C" void cuda_Hist(TColor *d_dst, int imageW, int imageH,int* data) { dim3 threads(BLOCKDIM_X, BLOCKDIM_Y); dim3 grid(iDivUp(imageW, BLOCKDIM_X), iDivUp(imageH, BLOCKDIM_Y)); Hist<<<grid, threads>>>(d_dst, imageW, imageH, data); }

    Read the article

  • How can multiple variables be passed to a function cleanly in C?

    - by aquanar
    I am working on an embedded system that has different output capabilities (digital out, serial, analog, etc.). I am trying to figure out a clean way to pass the many variables that will control those functions. I don't need to pass ALL of them too often, but I was hoping to have a function that would read the input data (in this case from a TCP network) and then parse it (e.g., the 3rd byte contains the states of 8 of the digital outputs, according to which bit in that byte is high or low), and put that into variables I can then use elsewhere in the program. I wanted that function to be separate from the main() function, but to do so would require passing pointers to some 20 or so variables that it would be writing to. I know I could make the variables global, but I am trying to make it easier to debug by making it obvious when a function is allowed to edit a variable, by passing it to the function. My best idea was a struct, passing just a pointer to it, but I wasn't sure if there was a more efficient way, especially since there is really only one function that would need to access all of them at once, while most others only require parts of the information stored in this bunch of state variables. So anyway, is there a clean way to send many variables that need to be edited between functions at once?

    Read the article

  • artisteer wp-theme metadata (date, category) lost

    - by Mattias Svensson
    I am going nuts over WordPress and Artisteer. I am trying something that used to be pretty straightforward: turning on and off the display of the date and post category for my posts on my blog page. I find this in content.php: global $post; theme_post_wrapper( array( 'id' => theme_get_post_id(), 'class' => theme_get_post_class(), 'title' => theme_get_meta_option($post->ID, 'theme_show_page_title') ? get_the_title() : '', 'heading' => theme_get_option('theme_single_article_title_tag'), 'before' => theme_get_metadata_icons('date', 'header'), 'content' => theme_get_content() ) ); The instructions say that all you have to do is insert or remove 'date' in the 'before' line. I've toggled it back and forth in my content files and nothing changes in the output. I can't find the actual code that prints it all; WordPress used to be so simple before everything was dug down 10 levels deep and you now have to look through millions of different functions to find the simplest things... As you can probably tell, I usually don't work with WP =) But this is on my table now and I haven't stayed up to date with WP for a couple of years... Any input as to where I can find the variables is appreciated... I had expected to at some point find 'posted at '.echo($date).' in category '.echo($category) or something at least remotely similar...
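
    If the Artisteer wrapper keeps ignoring the 'before' setting, one fallback (a sketch using core WordPress template tags rather than Artisteer's own functions, so it bypasses theme_post_wrapper() entirely) is to print the metadata directly inside the loop in content.php:

        <?php
        // Sketch: echo the post date and category list with core template tags.
        printf(
            'Posted on %s in %s',
            esc_html(get_the_date()),
            get_the_category_list(', ')
        );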

    Read the article

  • Is Domain Anaemia appropriate in a Service Oriented Architecture?

    - by Stimul8d
    I want to be clear on this. When I say domain anaemia, I mean intentional domain anaemia, not accidental. In a world where most of our business logic is hidden away behind a bunch of services, is a full domain model really necessary? This is the question I've had to ask myself recently since working on a project where the "domain" model is in reality a persistence model; none of the domain objects contain any methods, and this is a very intentional decision. Initially, I shuddered when I saw a library full of what are essentially type-safe data containers, but after some thought it struck me that this particular system doesn't do much more than basic CRUD operations, so maybe in this case this is a good choice. My problem, I guess, is that my experience so far has been very much focussed on a rich domain model, so it threw me a little. The remainder of the domain logic is hidden away in a group of helpers, facades and factories which live in a separate assembly. I'm keen to hear what people's thoughts are on this. Obviously, the considerations for reuse of these classes are much simpler, but is it really that great a benefit?

    Read the article

  • MVC design pattern in complex iPad app: is one fat controller acceptable?

    - by nutsmuggler
    I am building a complex iPad application; think of it as a scrapbook. For the purpose of this question, let's consider a page with two images on it. My main view displays my document data rendered as a single UIImage; this is because I need to do some global manipulation over them. This is my DisplayView. When editing, I need to instantiate an EditorView with my two images as subviews; this way I can interact with a single image (rotate it, scale it, move it). When editing is triggered, I hide my DisplayView and show my EditorView. In an iPhone app, I'd associate each main view (that is, a view filling the screen) with a view controller. The problem is that here there is just one view controller; I've considered presenting the EditorView via a modal view controller, but it's not an option (there is a complex layout with a mask covering everything and palettes over it; rebuilding it in the EditorView would create duplicate code). Presently the EditorView incorporates some logic (it loads data from the model, invokes some subviews for fine editing, and saves data back to the model); the EditorView subviews also incorporate some logic (I manipulate images and pass them back to the main EditorView). I feel this logic belongs more in a controller. On the other hand, I am not sure making my only view controller so fat is a good idea. What is the best, Cocoa-ish implementation of such a class structure? Feel free to ask for clarifications. Cheers.

    Read the article

  • Jquery Modal Popup opens twice on Single Click with ASP.Net MVC3

    - by user1704379
    I am using a modal popup in my MVC3 application; it works fine but opens twice for a single click on the link. The modal popup is triggered from the 'Index' view of my Home controller. I am calling a view 'PopUp.cshtml' in my modal popup. The related action method 'PopUp' for the respective view is in my 'Home' controller. Here is the code. jQuery code on the layout.cshtml page: <script type="text/javascript"> $.ajaxSetup({ cache: false }); $(document).ready(function () { $(".openPopup").live("click", function (e) { e.preventDefault(); $("<div></div><p>") .attr("id", $(this).attr("data-dialog-id")) .appendTo("body") .dialog({ autoOpen: true, title: $(this).attr("data-dialog-title"), modal: true, height: 250, width: 900, left: 0, buttons: { "Close": function () { $(this).dialog("close"); } } }) .load(this.href); }); $(".close").live("click", function (e) { e.preventDefault(); $(this).dialog("close"); }); }); </script> cshtml code in 'PopUp.cshtml': @{ ViewBag.Title = "PopUp"; Layout = null; } <h2>PopUp</h2> <p> Hello this is a Modal Pop-Up </p> Call to the modal popup in the Index view of the Home controller: <p> @Html.ActionLink("Click here to open modal popup", "Popup", "Home",null, new { @class = "openPopup", data_dialog_id = "popuplDialog", data_dialog_title = "PopUp" }) </p> What am I doing wrong that makes the modal popup open twice? Thanks in advance!

    Read the article

  • Stepping through ASP.NET MVC in Action (2009) - and stuck on nunit issue

    - by Jen
    I seem to have missed something. In this step-through it talks about downloading NUnit and changing the original MSTest reference to NUnit, which seems fine until it talks about running the test with UnitRun from JetBrains. I would have thought I could use NUnit to run the test, but when I load my project in the NUnit GUI I get "This assembly was not built with any known testing framework". This is after running Nunit-2.5.3.9346.msi. Or am I supposed to be able to run tests from within Visual Studio 2008? After some research I found this: http://www.jetbrains.com/unitrun/ (i.e. it seems to be saying this is no longer supported, and I'm thinking JetBrains ReSharper may cost money?). I'm a little rusty on my NUnit experience. So how do I go ahead and run my test? Is the error message I'm getting considered abnormal? I've added a reference in my MvcApplication.Tests project to nunit.framework. Is this the wrong reference to add? Thanks :)

    Read the article

  • My kernel only works in block (0,0)

    - by ZeroDivide
    I am trying to write a simple matrixMultiplication application that multiplies two square matrices using CUDA. I am having a problem where my kernel is only computing correctly in block (0,0) of the grid. This is my invocation code: dim3 dimBlock(4,4,1); dim3 dimGrid(4,4,1); //Launch the kernel; MatrixMulKernel<<<dimGrid,dimBlock>>>(Md,Nd,Pd,Width); This is my Kernel function __global__ void MatrixMulKernel(int* Md, int* Nd, int* Pd, int Width) { const int tx = threadIdx.x; const int ty = threadIdx.y; const int bx = blockIdx.x; const int by = blockIdx.y; const int row = (by * blockDim.y + ty); const int col = (bx * blockDim.x + tx); //Pvalue stores the Pd element that is computed by the thread int Pvalue = 0; for (int k = 0; k < Width; k++) { Pvalue += Md[row * Width + k] * Nd[k * Width + col]; } __syncthreads(); //Write the matrix to device memory each thread writes one element Pd[row * Width + col] = Pvalue; } I think the problem may have something to do with memory but I'm a bit lost. What should I do to make this code work across several blocks?

    Read the article

  • python interactive mode module import issue

    - by Jeff
    I believe I have what would be called a scope issue, or perhaps a namespace issue; I'm not too sure, as I'm new to Python. I'm trying to make a module that will search through a list using regular expressions. I'm sure there is a better way of doing it, but the error I'm getting is bugging me and I want to understand why. Here's my code: class relist(list): def __init__(self, l): list.__init__(self, l) def __getitem__(self, rexp): r = re.compile(rexp) res = filter(r.match, self) return res if __name__ == '__main__': import re listl = [x+y for x in 'test string' for y in 'another string for testing'] print(listl) test = relist(listl) print('----------------------------------') print(test['[s.]']) When I run this code from the command line it works the way I expect it to; however, when I run it in Python's interactive mode I get the error >>> test['[s.]'] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "relist.py", line 8, in __getitem__ r = re.compile(rexp) NameError: global name 're' is not defined While in interactive mode I do import re and am able to use the re functions, but for some reason when I try to use the module it doesn't work. Do I need to import re into the scope of the class? I wouldn't think so, because doesn't Python search through other scopes if a name is not found in the current one? I appreciate your help, and if there is a better way of doing this search I would be interested in knowing. Thanks

    Read the article

  • Best solution for __autoload

    - by tpk
    As our PHP5 OO application grew (in both size and traffic), we decided to revisit the __autoload() strategy. We always name the file after the class definition it contains, so class Customer would be contained within Customer.php. We used to list the directories in which a file could potentially exist and search them until the right .php file was found. This is quite inefficient, because you're potentially going through a number of directories you don't need to, and doing so on every request (thus making loads of stat() calls). Solutions that come to mind: use a naming convention that dictates the directory name (similar to PEAR) - disadvantage: doesn't scale too well, resulting in horrible class names; come up with some kind of pre-built array of the locations (Propel does this for its __autoload) - disadvantage: requires a rebuild before any deploy of new code; or build the array "on the fly" and cache it. The last seems to be the best solution, as it allows for any class names and directory structure you want, and is fully flexible in that new files just get added to the list. The concerns are: where to store it, and what about deleted/moved files? For storage we chose APC, as it doesn't have the disk I/O overhead. With regard to file deletes, it doesn't matter, as you probably don't want to require them anywhere anyway. As for moves... that's unresolved (we ignore it, as historically it didn't happen very often for us). Any other solutions?
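
    A minimal sketch of the "build it on the fly and cache it in APC" approach is below; the directory list and the cache key are placeholders, and it assumes PHP 5.3+ for the closure passed to spl_autoload_register().

        <?php
        // Sketch: lazily build a class => file map, cache it in APC, and only
        // fall back to scanning the source directories on a cache miss.
        spl_autoload_register(function ($class) {
            static $map = null;

            if ($map === null) {
                $map = apc_fetch('classmap');
                if ($map === false) {
                    $map = array();
                }
            }

            if (!isset($map[$class])) {
                foreach (array(__DIR__ . '/lib', __DIR__ . '/models') as $dir) {
                    $candidate = $dir . '/' . $class . '.php';
                    if (is_file($candidate)) {
                        $map[$class] = $candidate;
                        apc_store('classmap', $map);
                        break;
                    }
                }
            }

            if (isset($map[$class])) {
                require $map[$class];
            }
        });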

    Read the article

  • XmlSerializer construction with same named extra types

    - by NoizWaves
    I am running into trouble constructing an XmlSerializer where the extra types contain types with the same Name (but unique FullName). Below is an example that illustrates my scenario. Type definitions in an external assembly I cannot manipulate: public static class Wheel { public enum Status { Stopped, Spinning } } public static class Engine { public enum Status { Idle, Full } } Class I have written and have control over: public class Car { public Wheel.Status WheelStatus; public Engine.Status EngineStatus; public static string Serialize(Car car) { var xs = new XmlSerializer(typeof(Car), new[] {typeof(Wheel.Status),typeof(Engine.Status)}); var output = new StringBuilder(); using (var sw = new StringWriter(output)) xs.Serialize(sw, car); return output.ToString(); } } The XmlSerializer constructor throws a System.InvalidOperationException with the message "There was an error reflecting type 'Engine.Status'". This exception has an InnerException of type System.InvalidOperationException with the message "Types 'Wheel.Status' and 'Engine.Status' both use the XML type name, 'Status', from namespace ''. Use XML attributes to specify a unique XML name and/or namespace for the type." Given that I am unable to alter the enum types, how can I construct an XmlSerializer that will serialize Car successfully?

    Read the article

  • PHP Infinite Loop Problem

    - by Ashwin
    function httpGet( $url, $followRedirects=true ) { global $final_url; $url_parsed = parse_url($url); if ( empty($url_parsed['scheme']) ) { $url_parsed = parse_url('http://'.$url); } $final_url = $url_parsed; $port = $url_parsed["port"]; if ( !$port ) { $port = 80; } $rtn['url']['port'] = $port; $path = $url_parsed["path"]; if ( empty($path) ) { $path="/"; } if ( !empty($url_parsed["query"]) ) { $path .= "?".$url_parsed["query"]; } $rtn['url']['path'] = $path; $host = $url_parsed["host"]; $foundBody = false; $out = "GET $path HTTP/1.0\r\n"; $out .= "Host: $host\r\n"; $out .= "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\r\n"; $out .= "Connection: Close\r\n\r\n"; if ( !$fp = @fsockopen($host, $port, $errno, $errstr, 30) ) { $rtn['errornumber'] = $errno; $rtn['errorstring'] = $errstr; } fwrite($fp, $out); while (!@feof($fp)) { $s = @fgets($fp, 128); if ( $s == "\r\n" ) { $foundBody = true; continue; } if ( $foundBody ) { $body .= $s; } else { if ( ($followRedirects) && (stristr($s, "location:") != false) ) { $redirect = preg_replace("/location:/i", "", $s); return httpGet( trim($redirect) ); } $header .= $s; } } fclose($fp); return(trim($body)); } This code sometimes goes into an infinite loop. What's wrong here?
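
    One likely culprit is the unconditional recursion on every Location header: a redirect loop, or a relative Location (e.g. "Location: /login") that parse_url() cannot resolve to a host, can recurse forever. Below is a self-contained sketch of the usual guard, a capped redirect counter; it deliberately delegates the fetch to file_get_contents() to keep the example short, so it illustrates the guard rather than being a drop-in replacement for the socket code above.

        <?php
        // Sketch: follow at most $maxRedirects redirects, resolving relative
        // Location headers against the current URL's host.
        function httpGetLimited($url, $maxRedirects = 5)
        {
            for ($i = 0; $i <= $maxRedirects; $i++) {
                $context = stream_context_create(array(
                    'http' => array('follow_location' => 0, 'ignore_errors' => true),
                ));
                $body = file_get_contents($url, false, $context);

                // $http_response_header is populated by the http:// wrapper.
                $location = null;
                foreach ($http_response_header as $header) {
                    if (stripos($header, 'Location:') === 0) {
                        $location = trim(substr($header, 9));
                    }
                }

                if ($location === null) {
                    return trim($body);            // no redirect: we're done
                }

                if (parse_url($location, PHP_URL_HOST) === null) {
                    // Relative redirect: resolve it against the current host.
                    $parts    = parse_url($url);
                    $location = $parts['scheme'] . '://' . $parts['host'] . $location;
                }
                $url = $location;
            }
            return false;                          // too many redirects
        }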

    Read the article

  • If Statements Skipping or Evaluating Strangely, JavaScript and jquery

    - by tlm2021
    So in jQuery, I have a global variable "currentSubNav" that stores the currently visible element. The following code executes on "mouseenter". I need it to get the element's ID and check whether there is one; if there isn't, it sets the new visible element to the default. $('#mainMenu a').mouseenter(function() { var newName = $(this).attr("id"); if(newName == ''){ var newName = "default"; } Then it checks to see if the new element matches the current one. If so, it returns. If not, it performs the animations to bring in the new one.
    if(newName == currentSubNav){ return; }else{ $("div[name=" + currentSubNav + "]").animate({"left": "+=600px", "opacity": "toggle"}, "slow"); $("div[name=" + newName + "]").css({"margin-top": "0"}); $("div[name=" + newName + "]").fadeIn(2000); $("div[name=" + currentSubNav + "]").animate({"left": "-=600px"}, 0); currentSubNav = newName; return; } }); I'm using Chrome at the moment, and according to the dev tools that isn't what happens. Problem #1: "$(this).attr("id");" isn't returning undefined as the documentation claims. It seems to be returning "". BUT, when I have the if statement as I do above, it skips over the statement entirely. I set a breakpoint, but it never pauses execution, so the statement is never evaluated. Problem #2: After the animations occur, instead of using the return at the end of those statements it goes back and uses the return for the "newName == currentSubNav" if statement. I guess that's not a big deal, but it's not the intended behavior. I'm fairly new to JavaScript, and it appears I'm missing something about how JavaScript works, but I can't find what it is anywhere. Any help?
    Read the article

  • What rules govern cross-version compatibility for .NET applications and the C# language?

    - by John Feminella
    For some reason I've always had trouble remembering the backwards/forwards compatibility guarantees made by the framework, so I'd like to put that to bed forever. Suppose I have two assemblies, A and B. A is older and references .NET 2.0 assemblies; B references .NET 3.5 assemblies. I have the source for A and B, Ax and Bx, respectively; they are written in C# at the 2.0 and 3.0 language levels. (That is, Ax uses no features that were introduced later than C# 2.0; likewise Bx uses no features that were introduced later than 3.0.) I have two environments, C and D. C has the .NET 2.0 framework installed; D has the .NET 3.5 framework installed. Now, which of the following can/can't I do? Running: run A on C? run A on D? run B on C? run B on D? Compiling: compile Ax on C? compile Ax on D? compile Bx on C? compile Bx on D? Rewriting: rewrite Ax to use features from the C# 3 language level, and compile it on D, while having it still work on C? rewrite Bx to use features from the C# 4 language level on another environment E that has .NET 4, while having it still work on D? Referencing from another assembly: reference B from A and have a client app on C use it? reference B from A and have a client app on D use it? reference A from B and have a client app on C use it? reference A from B and have a client app on D use it? More importantly, what rules govern the truth or falsity of these hypothetical scenarios?

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS mounted on server 'B'. A process on A writes to two files, F1 and F2, in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file, and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?

    Read the article

  • Can this line of code really throw an IndexOutOfRange exception?

    - by Jonathan M
    I am getting an IndexOutOfRange exception on the following line of code: var searchLastCriteria = (SearchCriteria)Session.GetSafely(WebConstants.KeyNames.SEARCH_LAST_CRITERIA); I'll explain the above here: SearchCriteria is an enum with only two values; Session is the HttpSessionState; GetSafely is an extension method that looks like this: public static object GetSafely(this HttpSessionState source, string key) { try { return source[key]; } catch (Exception exc) { log.Info(exc); return null; } } WebConstants.KeyNames.SEARCH_LAST_CRITERIA is simply a constant. I've tried everything to replicate this error, but I cannot reproduce it. I am beginning to think the stack trace is wrong. I thought perhaps the exception was actually coming from the GetSafely call, but it is swallowing the exceptions, so that can't be the case, and even if it were, it should show up in the stack trace. Is there anything in the line of code above that could possibly throw an IndexOutOfRange exception? I know the line will throw a NullReferenceException if GetSafely returns null, and it will also throw an InvalidCastException if it returns anything that cannot be cast to SearchCriteria, but an IndexOutOfRange exception? I'm scratching my head here. Here is the stack trace: $LOG--> 2010-06-11 07:01:33,814 [ERROR] SERVERA (14) Web.Global - Index was outside the bounds of the array. System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.IndexOutOfRangeException: Index was outside the bounds of the array. at IterateSearchResult(Boolean next) in C:\Projects\Web\UserControls\AccountHeader.ascx.cs:line 242 at nextAccountLink_Click(Object sender, EventArgs e) in C:\Projects\Web\UserControls\AccountHeader.ascx.cs:line 232

    Read the article

  • Creating a function to grab data from an Oracle database (array by ID)

    - by Nick
    I'm trying to create a function that will simply allow me to pass an SQL statement into it, and it will generate an array based on a unique ID I pass it: function oracleGetGata($query, $id="id") { global $conn; $sql = OCI_Parse($conn, $query); OCI_Execute($sql); OCI_Fetch_All($sql, $results, null, null, OCI_FETCHSTATEMENT_BY_ROW); return $results; }   For example I'd like this query $array = oracleGetData('select * from table') to return something like: [1] => Array ( [Title] => Title 1 [Description] => Description 1 ) [2] => Array ( [Title] => Title 2 [Description] => Description 2 ) [3] => Array ( [Title] => Title 3 [Description] => Description 3 )   Rather than what it's returning at the moment: [0] => Array ( [ID] => 3 [TITLE] => Title 3 [DESCRIPTION] => Description 3 ) [1] => Array ( [ID] => 1 [TITLE] => Title 1 [DESCRIPTION] => Description 1 ) [2] => Array ( [ID] => 2 [TITLE] => Title 2 [DESCRIPTION] => Description 2 )   I'd really appreciate any help with this, as the function would save me lots of time! Thank you.
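
    A minimal sketch of one way to do the re-keying is below, assuming the same global $conn OCI8 connection as the snippet above; the function name and the row-by-row oci_fetch_assoc() loop are my own illustration, and since OCI8 returns column names in upper case, the $id parameter is uppercased before being used as the array key (the title-cased keys in the desired output would need extra handling).

        <?php
        // Sketch: fetch row by row and key the result array on the ID column
        // instead of the 0..n-1 index that OCI_Fetch_All produces.
        function oracleGetDataById($query, $id = 'id')
        {
            global $conn;

            $stmt = oci_parse($conn, $query);
            oci_execute($stmt);

            $rows = array();
            $key  = strtoupper($id);
            while (($row = oci_fetch_assoc($stmt)) !== false) {
                $rows[$row[$key]] = $row;
            }
            oci_free_statement($stmt);

            return $rows;
        }

        // Usage: $array = oracleGetDataById('select * from table');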

    Read the article

  • PNG Textures not loading on HTC desire

    - by Matthew Tatum
    Hi, I'm developing a game for Android using OpenGL ES and have hit a problem: my game loads fine in the emulator (Windows XP and Vista, from Eclipse), and it also loads fine on a T-Mobile G2 (HTC Hero); however, when I load it on my new HTC Desire none of the textures appear to load correctly (or at all). I suspect the BitmapFactory.decode method, although I have no evidence that that is the problem. All of my textures are powers of 2. JPG textures seem to load (although they don't look great quality), but anything that is GIF or PNG just doesn't load at all, except for a 2x2 red square (which loads fine) and one texture that maps to a 3D object but seems to fill each triangle of the mesh with the nearest colour. This is my code for loading images: AssetManager am = androidContext.getAssets(); BufferedInputStream is = null; try { is = new BufferedInputStream(am.open(fileName)); Bitmap bitmap; bitmap = BitmapFactory.decodeStream(is); GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); bitmap.recycle(); } catch(IOException e) { Logger.global.log(Level.SEVERE, e.getLocalizedMessage()); } finally { try { is.close(); } catch(Exception e) { // Ignore. } } thanks

    Read the article
