Search Results


  • XSP2 crashes serving static images

    - by Dawkins
    Hi. Requesting a simple HTML page that contains a jpg image makes XSP2 crash; if I remove the image from the HTML, the page is served fine every time. The version is XSP2 2.0 on Mono 2.6.1; version 2.4.2.2 on the same machine works fine. I have tested it on two different machines, both Windows Vista Business SP1. Has anyone experienced the same? Any clue what the problem could be?

    EDIT: since another user is having the same problem, I have submitted a bug to Novell and created a little zip to reproduce the problem: https://bugzilla.novell.com/show_bug.cgi?id=582162

    Below is the stack trace displayed by the console (the line in Spanish translates to "An existing connection was forcibly closed by the remote host"):

        Peer unexpectedly closed the connection on write. Closing our end.
        System.IO.IOException: Write failure ---> System.Net.Sockets.SocketException:
          Se ha forzado la interrupción de una conexión existente por el host remoto.
          at System.Net.Sockets.Socket.Send (System.Byte[] buf, Int32 offset, Int32 size, SocketFlags flags) [0x00000] in <filename unknown>:0
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          --- End of inner exception stack trace ---
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          at Mono.WebServer.XSPWorker.Write (System.Byte[] buffer, Int32 position, Int32 size) [0x00000] in <filename unknown>:0

        Peer unexpectedly closed the connection on write. Closing our end.
        System.ObjectDisposedException: The object was used after being disposed.
          at System.Net.Sockets.NetworkStream.CheckDisposed () [0x00000] in <filename unknown>:0
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          at Mono.WebServer.XSPWorker.Write (System.Byte[] buffer, Int32 position, Int32 size) [0x00000] in <filename unknown>:0

    Thank you.


  • GoDaddy Subdomain Hosting Issue/Question with Disk Access (C#/ASP.NET 3.5)

    - by Vogel
    This isn't a very complicated scenario really, but as I start to type out the problem I'm realizing how convoluted it can become textually. Let me try and be very clear.

    First, the setup: I have a C#/ASP.NET web application that is publicly facing on my main domain (www), let's call it www.mysite.com. Nothing fancy, just a front-end that connects to SQL to display records. Then I have a second C#/ASP.NET web application that is secured using forms authentication, running on a subdomain, let's call it admin.mysite.com. This is a very lightweight CMS system to administer the public site.

    Now, the problem: both of these sites run fine for basic tasks. However, my problem arises when I try to gain access to the file system for uploading. GoDaddy requires subdomains to run as virtual directories under the main application in IIS (so the subdomains actually resolve/redirect to www.mysite.com/admin when you type in admin.mysite.com), but because of this I am unable to write to my website root from the subfolder.

    Let me explain a little more. The CMS system (running as a virtual directory) gives the admin the ability to upload photos for display on the main site, the target folder of which is www.mysite.com/images. When attempting disk access from the root app, I am able to write to the virtual directory, but cannot do the opposite -- that is, write to the root from the virtual directory; I get security violations. If I can only upload to the /admin/ virtual directory, the entire point is moot because it's a secured folder that the public can't see! The only solution I can think of is to upload the files to the /admin/ virtual directory, then call a URL in the root that moves files from /admin/ back to the root, but that is entirely ghetto.

    I hope this post makes sense. Anyone else experience anything like this? The bottom line is that it seems virtual directories ONLY have access to themselves, and not their parent directories, no matter what credentials are used. Thanks!


  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help to shed some light on. At a high level the problem is as below. The following query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, then it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows, these being: 1) the FulltextMatch table valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1670 rows before the join), and 2) the index seek on the full text index, where the estimate is 1 row and the actual is 13,000 rows. As a result of the estimates, the optimiser is choosing to use a nested loops join (since it assumes a small number of rows), hence the plan is inefficient.

    We can work around the problem by either (a) parameterising the query and adding an OPTION (OPTIMIZE FOR UNKNOWN) hint to the query or (b) forcing a HASH JOIN to be used. In both of these cases the query returns in under 1 second and the estimates appear reasonable.

    My question really is: why are the estimates used in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes on the indexed view being used here. Any help greatly appreciated.
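
    A sketch of the two workarounds mentioned above (the @chgDate parameter name is illustrative, and OPTIMIZE FOR UNKNOWN assumes SQL Server 2008 or later):

        -- (a) parameterise the filter and tell the optimiser not to sniff the value
        DECLARE @chgDate DATETIME;
        SET @chgDate = '19 Feb 2010';
        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > @chgDate
        OPTION (OPTIMIZE FOR UNKNOWN);

        -- (b) force a hash join so the join strategy no longer depends on the low row estimate
        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        INNER HASH JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010';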


  • ASP.NET RadioButton messing with the name (groupname)

    - by Hojou
    I have a templated control (a Repeater) listing some text and other markup. Each item has a RadioButton associated with it, making it possible for the user to select ONE of the items created by the Repeater. The Repeater writes the RadioButton, setting its id and name generated with the default ASP.NET naming convention, making each RadioButton a full 'group' of its own. This means all RadioButtons are independent of each other, which unfortunately means I can select all RadioButtons at the same time.

    The RadioButton has the clever attribute 'GroupName' used to set a common name so they get grouped together and thus should be dependent (so I can only select one at a time). The problem is this doesn't work - the Repeater makes sure the id, and thus the name (which controls the grouping), are different. Since I use a Repeater (could have been a ListView or any other templated databound control) I can't use the RadioButtonList.

    So where does that leave me? I know I've had this problem before and solved it. I know almost every ASP.NET programmer must have had it too, so why can't I google and find a solid solution to the problem? I came across solutions that enforce the grouping by javascript (ugly!) or even handle the radio buttons as non-server controls, forcing me to do a Request.Form[name] to read the status. I also tried experimenting with overriding the name attribute in the PreRender event - unfortunately the owning page and master page again override this name to reflect the full id/name, so I end up with the same wrong result.

    If you have no better solution than I posted, you are still very welcome to post your thoughts - at least I'll know that my friend 'jack' is right about how messed up 'asp.net' is sometimes ;)
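
    Not from the original post, but a rough sketch of one common workaround: subclass RadioButton and rewrite the rendered name attribute to the plain GroupName, then read the selection back with Request.Form[GroupName] on postback (the class name is illustrative):

        using System.IO;
        using System.Text;
        using System.Text.RegularExpressions;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public class GroupedRadioButton : RadioButton
        {
            protected override void Render(HtmlTextWriter writer)
            {
                // Let the base control render to a buffer first...
                var buffer = new StringBuilder();
                using (var htmlWriter = new HtmlTextWriter(new StringWriter(buffer)))
                {
                    base.Render(htmlWriter);
                }

                // ...then replace the mangled UniqueID-based name with the shared GroupName,
                // so the browser treats every instance as one radio group.
                string html = Regex.Replace(buffer.ToString(),
                                            "name=\"[^\"]+\"",
                                            "name=\"" + GroupName + "\"");
                writer.Write(html);
            }
        }

    Because the posted value no longer matches the control's UniqueID, Checked is not restored automatically; the selected value has to be read from Request.Form[GroupName], which is the trade-off already mentioned above.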


  • Nusphere PHPEd: PHP Function Hints Lost Arguments?

    - by Eli
    Hi All, my PHPEd suddenly stopped showing arguments and argument order in the hints, and now just shows a basic description of the function. Before I go digging around in the config files, has anyone else had this problem? Thanks!

    Edit: Sorry, I may not have been entirely clear on this. There is no problem with my own classes, only with the actual PHP functions. Example:

    How it used to work: I type a PHP function, say strpos. As soon as I type the '(' at the end of it, I get the little yellow box, showing something like this:

        int strpos ( string $haystack , mixed $needle [, int $offset=0 ] )

    with the first argument bold. If I type it, and then a comma, it bolds the second argument, and so on. This is really nice, since PHP functions are a bit scrambled as far as argument order, and I don't have to look them up every time.

    How it works now: I type a PHP function, say strpos. As soon as I type the '(' at the end of it, I get the little yellow box. It says something like "strpos - Returns the numeric position of the first occurrence of needle in the haystack string." There are no arguments shown, which makes the little box basically worthless - I know what strpos does, I just want a reminder of the argument order.

    I think this may be a problem with the included PHPDoc, which I never use, but may be the source of the data for the hint box. I did recently upgrade to 5.6, but ended up removing it and restoring 5.2. I installed to a different folder, and uninstalled from there, but it may have overwritten something in the original folder? I'm using v5.2 (5220). Thanks!


  • razor websites not working and all dlls are present

    - by Michael Tot Korsgaard
    I've uploaded a .cshtml website to a Surftown server, and I have some problems running the code: the site does not run the Razor code. This is how the page (Default.cshtml) renders [screenshot]. I've already checked for internal communication problems, and this is my result [screenshot]. But then why isn't it working, and how can I fix it? I've heard that it can be a problem with views, but how would I fix this if that's the case?

    My website's folder tree (and some files too):

        App_Code
        App_Data
        packages
        Microsoft.AspNet.Razor.2.0.20710.0
        Microsoft.Asp.Net.WebPages.2.0.20710.0
        Microsoft.Asp.Net.WebPages.Administration.2.0.20710.0
        Microsoft.Asp.Net.WebPages.Data.2.0.20710.0
        Microsoft.Asp.Net.WebPages.WebData.2.0.20710.0
        Microsoft.Web.Infrastructure.1.0.0.0
        NuGet.Core.1.6.2
        bin
        packages
        jQuery.2.0.3
        Content
        Scripts
        Tools
        Microsoft.AspNet.Mvc.4.0.30506.0
        lib
        net40
        Microsoft.AspNet.Razor.2.0.30506.0
        lib
        net40
        Microsoft.AspNet.WebPages.2.0.30506.0
        lib
        net40
        Pages
        Chapters
        Read.cshtml
        Edit
        Move
        Chapter.cshtml
        Entry.cshtml
        Entries
        EnterEntry.cshtml
        EnterNote.cshtml
        Login
        Login.cshtml
        Search
        Result.cshtml
        Scripts
        Addons
        TinyMCE
        Styles
        CSS
        Views
        _Layout.cshtml
        Default.cshtml

    My web.config file looks like this:

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0">
              <buildProviders>
                <add extension=".cshtml" type="System.Web.WebPages.Razor.RazorBuildProvider, System.Web.WebPages.Razor"/>
              </buildProviders>
              <assemblies>
                <add assembly="System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              </assemblies>
            </compilation>
          </system.web>
          <connectionStrings>
            <add connectionString="database connection" providerName="System.Data.SqlClient"/>
          </connectionStrings>
        </configuration>

    EDIT: Is it a problem that all my files are .cshtml?
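
    Not from the original post, but a common fix when a shared host serves .cshtml pages without running the Razor code is to pin the Web Pages version explicitly in web.config and make sure the WebPages/Razor assemblies are deployed in bin (the 2.0.0.0 value below assumes the ASP.NET Web Pages 2 assemblies shown in the folder tree):

        <appSettings>
          <add key="webpages:Enabled" value="true" />
          <add key="webpages:Version" value="2.0.0.0" />
        </appSettings>

    This goes inside the <configuration> element, alongside <system.web>.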


  • Dot Matrix printing in C# ?

    - by Dale
    I'm trying to print to dot matrix printers (various models) out of C#. Currently I'm using Win32 API calls (you can find a lot of examples online) to send escape codes directly to the printer from my C# application. This works great, but...

    My problem is that, because I'm generating the escape codes and not relying on the Windows print system, the printouts can't be sent to any "normal" printers or to things like PDF print drivers. (This is now causing a problem as we're trying to use the application on a 2008 Terminal Server using Easy Print, which is XPS based.)

    The question is: how can I print formatted documents (invoices on pre-printed stationery) to dot matrix printers (Epson, Oki and Panasonic, various models) out of C# without using direct printing, escape codes etc.?

    Just to clarify, I'm trying things like GDI+ (System.Drawing.Printing), but the problem is that it's very hard to get things to line up like the old code did. (The old code sent the characters directly to the printer, bypassing the Windows driver.) Any suggestions how things could be improved so that they could use GDI+ but still line up like the old code did?
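
    Not a complete answer, but a sketch of one way to make GDI+ output line up the way the escape-code output did: use a fixed-pitch font and compute row/column positions from the measured character width and line height (the invoice fields below are made up). Because this goes through the normal Windows print pipeline, it also works with PDF drivers and Easy Print:

        using System;
        using System.Drawing;
        using System.Drawing.Printing;

        class InvoicePrinter
        {
            public void PrintInvoice()
            {
                var doc = new PrintDocument();
                doc.PrintPage += OnPrintPage;
                doc.Print();   // optionally set doc.PrinterSettings.PrinterName first
            }

            private void OnPrintPage(object sender, PrintPageEventArgs e)
            {
                using (var font = new Font("Courier New", 10))
                {
                    // With a fixed-pitch font every character has the same width, so
                    // "column 10" is simply 10 * charWidth, just as it was with escape codes.
                    float charWidth = e.Graphics.MeasureString("MM", font).Width
                                    - e.Graphics.MeasureString("M", font).Width;
                    float lineHeight = font.GetHeight(e.Graphics);

                    e.Graphics.DrawString("INVOICE No. 0001", font, Brushes.Black,
                                          10 * charWidth, 2 * lineHeight);
                    e.Graphics.DrawString("QTY", font, Brushes.Black,
                                          2 * charWidth, 5 * lineHeight);
                    e.Graphics.DrawString("DESCRIPTION", font, Brushes.Black,
                                          10 * charWidth, 5 * lineHeight);
                }
            }
        }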


  • Getting a connection from a Sybase datasource in WAS 6.1 fails with message "User name property missing"

    - by Abel Morelos
    I have a standalone application that needs to connect to a Sybase database via a datasource. I'm trying to call getConnection() and get the connection from this Sybase datasource, which is hosted in WAS 6.1. Sadly, I'm getting error JZ004 (from the Sybase(R) jConnect for JDBC(TM) Programmer's Reference, SQL Exception and Warning Messages):

        JZ004: User name property missing in DriverManager.getConnection(..., Properties)
        Action: Provide the required user property.

    As you can see, this is not a connectivity problem (so we can discard JNDI or lookup problems), but rather a configuration problem. For my Sybase datasource in WAS 6.1 I have set up the proper authentication alias (Component-managed Authentication Alias), and I know the credentials are alright: "Test Connection" is successful for this datasource. Somebody had a similar problem and it was because of the authentication alias: http://forum.springsource.org/showthread.php?t=39915

    Next, I tried calling getConnection() but this time I provided the credentials, like getConnection(user, password)... and this time it worked!!! So I suspect that somehow WAS 6.1 is not picking up the authentication info I set on the datasource as mentioned before.

    If you think that maybe getConnection(user, password) should be OK for my case, well, that's not the case, since I have a requirement to keep the credentials on the server; the standalone application only needs to know the JNDI information to look up the datasource. Please let me know if you have faced a similar problem, or what you would suggest me to do. Thanks.
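
    Not from the original post, but one thing worth checking: WAS normally applies the datasource's authentication alias only to code running inside the application server that obtains the datasource through a resource reference; a standalone client that looks the datasource up directly over remote JNDI does not get the server-side credentials, which would explain why only getConnection(user, password) works. A rough sketch of the in-server resource-reference approach (names are illustrative):

        <!-- web.xml (or the EJB deployment descriptor) -->
        <resource-ref>
            <res-ref-name>jdbc/SybaseDS</res-ref-name>
            <res-type>javax.sql.DataSource</res-type>
            <res-auth>Container</res-auth>
            <res-sharing-scope>Shareable</res-sharing-scope>
        </resource-ref>

        // lookup through java:comp/env so the container supplies the alias credentials
        DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/SybaseDS");
        Connection con = ds.getConnection();

    For a truly standalone (outside-the-server) client this does not apply, so the credentials would have to be provided to the client some other way.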


  • CreateDelegate with unknown types

    - by Giorgi
    Hello, I am trying to create delegates for reading/writing properties of an unknown type of class at runtime. I have a generic class Main<T> and a method which looks like this:

        Delegate.CreateDelegate(typeof(Func<T, object>), get)

    where get is the MethodInfo of the property getter that should be read. The problem is that when the property returns int (I guess this happens for value types), the above code throws an ArgumentException because the method cannot be bound. In the case of string it works well.

    To solve the problem I changed the code so that the corresponding delegate type is generated using MakeGenericType. So now the code is:

        Type func = typeof(Func<,>);
        Type generic = func.MakeGenericType(typeof(T), get.ReturnType);
        var result = Delegate.CreateDelegate(generic, get);

    The problem now is that the created delegate is an instance of the constructed generic type, so I have to use DynamicInvoke, which would be as slow as using pure reflection to read the property.

    So my question is: why does the first snippet fail with value types? According to MSDN it should work, as it says that "the return type of a delegate is compatible with the return type of a method if the return type of the method is more restrictive than the return type of the delegate". And how can I execute the delegate in the second snippet so that it is faster than reflection? Thanks.
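
    The quoted compatibility rule only covers reference conversions; returning an int through a Func<T, object> would need a boxing conversion, which delegate binding does not perform. A sketch of one common way around this (names are illustrative): bind a strongly typed Func<T, TProp> first, then wrap it in a lambda that does the boxing, so the reflection cost is paid only once and the result is an ordinary delegate call rather than DynamicInvoke:

        using System;
        using System.Reflection;

        static class GetterFactory
        {
            public static Func<T, object> Create<T>(PropertyInfo property)
            {
                MethodInfo wrapper = typeof(GetterFactory)
                    .GetMethod("CreateTyped", BindingFlags.NonPublic | BindingFlags.Static)
                    .MakeGenericMethod(typeof(T), property.PropertyType);

                return (Func<T, object>)wrapper.Invoke(null, new object[] { property.GetGetMethod() });
            }

            private static Func<T, object> CreateTyped<T, TProp>(MethodInfo getMethod)
            {
                // Binding to the exact property type succeeds for value and reference types alike.
                var typed = (Func<T, TProp>)Delegate.CreateDelegate(typeof(Func<T, TProp>), getMethod);
                // The lambda performs the boxing; invoking it is a normal delegate call.
                return instance => typed(instance);
            }
        }

    Usage would be something like: Func<Person, object> getter = GetterFactory.Create<Person>(typeof(Person).GetProperty("Age"));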


  • Cannot upload media via Wordpress uploader

    - by Justin Johnson
    This has to do with media uploading in WordPress. Every time WP creates a folder for new uploads (it organizes uploads by year and month: yyyy/mm), it creates it with the "apache:apache" user and group, with full access for all (777 or drwxrwxrwx). However, after that, WP cannot create a folder within that folder (e.g. mkdir 2011 succeeds, but mkdir 2011/01 fails). Also, uploads cannot be moved into these newly created folders even though the permissions are 777 (rwxrwxrwx).

    Once a month, I have to chown the newly created folders to be the same user:group as the rest of the files. Once I do that, uploading works fine (which doesn't make sense to me). The really frustrating part is that this problem doesn't exist in other WP installs on other domains on the same server. (I wasn't sure if this should be here or on Server Fault.)

    Edit: The containing directory /.../httpdocs/blog/wp-content/uploads has the correct ownership:

        drwxrwxrwx 5 myuser psaserv 4096 Jun  3 18:38 uploads

    This is a Plesk/CentOS environment hosted by Media Temple (dv). I've written the following test script to simulate the problem:

        <?php
        $d = "d" . mt_rand(100, 500);
        var_dump(
            get_current_user(),
            $d,
            mkdir($d),
            chmod($d, 0777),
            mkdir("$d/$d"),
            chmod("$d/$d", 0777),
            fileowner($d),
            getmyuid()
        );

    The script always creates the first directory, mkdir($d), successfully. On domain A, where the WP problem is, it cannot create the nested directory, mkdir("$d/$d"). However, on domain B, both directories are successfully created. I am running the scripts at /var/www/vhosts/domainA/httpdocs/tmp/t.php and /var/www/vhosts/domainB/httpdocs/tmp/t.php respectively. I checked the permissions on tmp, httpdocs, and domain[AB] and they are the same for each path. The only thing that differs is the user.


  • Issue in creating Zip file using glob.glob

    - by infosyssec
    Hi, I am creating a zip file from a folder (and subfolders). It works fine and creates a new .zip file, but I am having an issue while using glob.glob. It reads all files from the desired folder (source folder) and writes them to the new zip file, but the problem is that it adds the subdirectories themselves without adding the files inside those subdirectories. I am giving the user an option to select the filename and path, as well as the filetype (zip or tar). I don't have any problem creating a .tar.gz file, but this problem appears when the user creates a .zip file. Here is my code:

        for name in (Source_Dir):
            for name in glob.glob("/path/to/source/dir/*"):
                myZip.write(name, os.path.basename(name), zipfile.ZIP_DEFLATED)
        myZip.close()

    Also, if I use the code below:

        for dirpath, dirnames, filenames in os.walk(Source_Dir):
            for filename in filenames:
                myZip.write(os.path.join(dirpath, filename), os.path.basename(filename))
        myZip.close()

    then this second version takes all files, even those inside folders/subfolders, creates a new .zip file and writes them to it without any directory structure. It does not even keep the directory structure for the main folder and simply writes all files from the main dir or subdirs into that .zip file. Can anyone please help me or make a suggestion? I would prefer glob.glob rather than the second option. Thanks in advance. Regards, Akash
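
    Not from the original post, but a sketch of how the os.walk version can keep the directory structure: pass an arcname that is the path relative to the source folder instead of just the basename (Source_Dir and the output path below are assumptions):

        import os
        import zipfile

        def zip_tree(source_dir, zip_path):
            my_zip = zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED)
            for dirpath, dirnames, filenames in os.walk(source_dir):
                for filename in filenames:
                    full_path = os.path.join(dirpath, filename)
                    # the relative path becomes the name inside the archive,
                    # so subfolders are preserved instead of flattened
                    arcname = os.path.relpath(full_path, source_dir)
                    my_zip.write(full_path, arcname)
            my_zip.close()

        zip_tree("/path/to/source/dir", "output.zip")

    A glob-only version would need a recursive pattern per subdirectory, which is why os.walk is usually the simpler tool here.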


  • SQL Server - Cascade delete has multiple paths

    - by Anders Juul
    Hi all, I have two tables, Results and ComparedResults. ComparedResults has two columns which reference the primary key of the Results table. My problem is that if a record in Results is deleted, I wish to delete all records in ComparedResults which reference the deleted record, regardless of whether it's one column or the other (and the columns may reference the same Results row). A row in Results may be deleted directly or through a cascade delete caused by deleting in a third table.

    Googling this would indicate that I need to disable cascade delete and rewrite all cascade deletes to use triggers instead. Is that REALLY necessary? I'd be prepared to do much restructuring of the database to avoid this, as my main area is OO programming, and databases should 'just work'. It is hard to see, however, how a restructuring could help, as I would just move the problem around... Or am I missing something?

    I am also a bit at a loss as to why my initial construct should even be a problem for SQL Server! Any comments welcome and much appreciated! Anders, Denmark
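
    For reference, a rough sketch of the trigger-based route (the column names ResultA_Id/ResultB_Id and key name Id are assumed from the description; note that SQL Server does not allow an INSTEAD OF DELETE trigger on a table that is itself the target of a cascading delete, so the cascade coming from the third table would also have to become trigger-based):

        CREATE TRIGGER trg_Results_Delete
        ON Results
        INSTEAD OF DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- remove dependants that reference the deleted rows through either column
            DELETE cr
            FROM ComparedResults AS cr
            JOIN deleted AS d
              ON cr.ResultA_Id = d.Id OR cr.ResultB_Id = d.Id;

            -- then delete the rows that were originally targeted
            DELETE r
            FROM Results AS r
            JOIN deleted AS d
              ON r.Id = d.Id;
        END;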


  • regex php: find everything in div

    - by Fifth-Edition
    I'm trying to find everything inside a div using a regexp. I'm aware that there is probably a smarter way to do this, but I've chosen regexp. So currently my regexp pattern looks like this:

        $gallery_pattern = '/([\s\S]*)<\/div/';

    And it does the trick - somewhat. The problem is if I have two divs after each other, like this:

        <div class="gallery">text to extract here</div>
        <div class="gallery">text to extract from here as well</div>

    I want to extract the information from both divs, but my problem, when testing, is that I'm not getting the two texts as separate results but instead:

        "text to extract here text to extract from here as well"

    So, to sum up: it skips the first end of the div and continues on to the next. The text inside the div can contain "<", "/" and line breaks, just so you know! Does anyone have a simple solution to this problem? I'm still a regexp novice.
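
    A minimal sketch of the usual fix: make the quantifier lazy with *?, so each match stops at the first closing </div> rather than the last one (the opening-tag part of the pattern is assumed here, since it was cut off above):

        <?php
        $html = '<div class="gallery">text to extract here</div>'
              . '<div class="gallery">text to extract from here as well</div>';

        $gallery_pattern = '/<div class="gallery">([\s\S]*?)<\/div>/';
        preg_match_all($gallery_pattern, $html, $matches);

        print_r($matches[1]);
        // Array ( [0] => text to extract here [1] => text to extract from here as well )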


  • Subclassing NSArrayController in order to limit size of arrangedObjects

    - by Simone Manganelli
    I'm trying to limit the number of objects in an array controller, but I still want to be able to access the full array if necessary. A simple solution I came up with was to subclass NSArrayController and define a new method named "limitedArrangedObjects" that returns a limited number of objects from the real set of arranged objects. (I've seen http://stackoverflow.com/questions/694493/limiting-the-number-of-objects-in-nsarraycontroller , but that doesn't address my problem.)

    I want this property to be observable via bindings, so I set a dependency on arrangedObjects. The problem is, when arrangedObjects is updated, limitedArrangedObjects seems not to be observing the value change in arrangedObjects. I've hooked up an NSCollectionView to limitedArrangedObjects, and zero objects are being displayed. (If I bind it to arrangedObjects instead, all the objects show up as expected.) What's the problem? Here's the relevant code:

        @property (readonly) NSArray *limitedArrangedObjects;

        - (NSArray *)limitedArrangedObjects;
        {
            NSArray *arrangedObjects = [super arrangedObjects];
            NSUInteger upperLimit = 10000;
            NSUInteger count = [arrangedObjects count];
            if (count > upperLimit) count = upperLimit;
            NSArray *arrayToReturn = [arrangedObjects subarrayWithRange:NSMakeRange(0, count)];
            return arrayToReturn;
        }

        + (NSSet *)keyPathsForValuesAffectingValueForKey:(NSString *)key;
        {
            NSSet *keyPaths = [super keyPathsForValuesAffectingValueForKey:key];
            if ([key isEqualToString:@"limitedArrangedObjects"]) {
                NSSet *affectingKeys = [NSSet setWithObjects:@"arrangedObjects", nil];
                keyPaths = [keyPaths setByAddingObjectsFromSet:affectingKeys];
            }
            return keyPaths;
        }


  • Page_PreRender fires twice on first load in session

    - by awe
    I have an issue where, when I access the application, I notice that Page_PreRender is fired twice. This only happens the first time in a new session. It does not happen if I refresh the page, or on postbacks. I use .NET Framework 3.5 and the built-in Ajax functionality.

    I think the problem is not related to an img tag with an empty src attribute, as other posts have mentioned, because I see this in both Firefox and IE; the posts I saw about this stated that it was not a problem in IE. I have also searched and found no img tags with an empty src in the generated page source, so it should not be this.

    I have also made a simple test page where I have included some of the functionality, and the double firing does not happen there. I even tried to reproduce the empty-src bug by including <img src="" /> on that page, but it does not trigger this problem. Does anyone have any suggestions on what is happening? Note: it is the entire page cycle that is firing twice, not just Render.


  • Rails emails not sending in staging environment

    - by jrdioko
    In a Rails application I set up a new staging environment with the following parameters in its environments/ file:

        config.action_mailer.perform_deliveries = true
        config.action_mailer.raise_delivery_errors = true
        config.action_mailer.delivery_method = :smtp

    However, when the system generates an email, it gets printed to the staging.log file instead of being sent. My SMTP settings work fine in other environments. What configuration am I missing to get the emails to actually send?

    Edit: Yes, the staging box is set up with a valid configuration for an SMTP server it has access to. It seems like the problem isn't with the SMTP settings (if it was, wouldn't I get errors in the logs?), but with the Rails configuration. The application is still redirecting emails to the log file (saying "Sent mail: ...") as opposed to actually going through SMTP.

    Edit #2: It looks like the emails actually have been sending correctly, they just happen to print to the log as well. I'm trying to use the sanitize_email gem to redirect the mail to another address, and that doesn't seem to be working, which is why I thought the emails weren't going out. So I think that solves my problem, although I'm still curious what in ActionMailer's settings controls whether emails are sent, logged to the log file, or both.

    Edit #3: The problem with sanitize_email boiled down to me needing to add the new staging environment to ActionMailer::Base.local_environments. I'll keep this question open to see if anyone can answer my last question (what determines whether ActionMailer's emails get sent out, logged to the log file, or both?)
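
    On the open question: in this era of Rails, ActionMailer writes the "Sent mail" log line whenever a delivery is performed, regardless of delivery method, while config.action_mailer.delivery_method (:smtp, :sendmail, :test) and perform_deliveries control whether and how the mail actually leaves the box - so seeing the mail in staging.log does not by itself mean it was not sent. A sketch of a staging environment file combined with the local_environments fix mentioned in Edit #3 (the SMTP host and the exact way the gem expects the environment to be registered are assumptions):

        # config/environments/staging.rb
        config.action_mailer.perform_deliveries = true
        config.action_mailer.raise_delivery_errors = true
        config.action_mailer.delivery_method = :smtp
        config.action_mailer.smtp_settings = {
          :address => "smtp.example.com",   # assumed host
          :port    => 587,
          :domain  => "example.com"
        }

        # e.g. in an initializer, so sanitize_email treats staging as a local environment
        ActionMailer::Base.local_environments << "staging"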


  • Http Digest Authentication, Handle different browser char-sets...

    - by user160561
    Hi all, I tried to use the HTTP Digest authentication scheme with my PHP (Apache module) based website. In general it works fine, but when it comes to verifying the username/hash against my user database I run into a problem.

    Of course I do not want to store the user's password in my database, so I tend to store the A1 hash value (which is md5($username . ':' . $realm . ':' . $password)) in my DB. This is also how the browser builds the hashes it sends back. The problem: I am not able to detect whether the browser does this with an ISO-8859-1 fallback (like Firefox and IE) or with UTF-8 (Opera) or whatever. I have chosen to do the calculation in UTF-8 and store that md5 hash, which leads to failed authentication in Firefox and IE.

    How do you solve this problem? Just not use this auth scheme? Or store an md5 hash for each charset? Force users to use Opera? (The A1 terminology refers to the http://php.net/manual/en/features.http-auth.php example; for Digest access authentication, see the corresponding Wikipedia entry.)
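
    A rough sketch of the "store one hash per charset" option (assuming the credentials arrive as UTF-8 strings at registration time, and that $data holds the parsed Authorization header fields as in the php.net example linked above):

        <?php
        // at registration: store an A1 hash for both encodings
        $a1_input  = $username . ':' . $realm . ':' . $password;
        $a1_utf8   = md5($a1_input);
        $a1_latin1 = md5(mb_convert_encoding($a1_input, 'ISO-8859-1', 'UTF-8'));

        // at login: accept the response if it matches either stored A1
        function expected_response($a1, $data, $method) {
            $a2 = md5($method . ':' . $data['uri']);
            return md5($a1 . ':' . $data['nonce'] . ':' . $data['nc'] . ':'
                           . $data['cnonce'] . ':' . $data['qop'] . ':' . $a2);
        }

        $method = $_SERVER['REQUEST_METHOD'];
        $valid = ($data['response'] === expected_response($a1_utf8, $data, $method))
              || ($data['response'] === expected_response($a1_latin1, $data, $method));

    Only the two A1 hashes need to be stored; the plain-text password is still never kept.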


  • C# visually subclass datagridview control VS2005

    - by DRapp
    Maybe it's something stupid, but I'm having a problem with a subclass of the DataGridView control in VS2005 C#. I know I can subclass from almost anything by doing:

        public class MyDataGridView : DataGridView { }

    No problem, and I put in some things/elements I want applicable globally. Now, I take this grid view and put it into a custom user control that will contain other controls too. So I have something like the following, created by the visual designer; I grab some buttons, a label, and my derived "MyDataGridView" and put them on it:

        public partial class MyCompoundDGVPlus : UserControl

    So now I can visually draw, move, and change all sorts of settings as needed, no problem. Next, I want this "MyCompoundDGVPlus" class as the basis for other classes, in which I will manipulate specific settings, but I want all of them to have the same look/feel and otherwise similar flow, hence the derivations. I've even set the "Modifiers" property to public, so I SHOULD be able to modify any of the properties of the controls at any derived level.

    So now I create a new subclass, "MyFirstDetailedDGVPlus", derived from "MyCompoundDGVPlus". Visually, the label, buttons and data grid view all appear. However, now I want to define the columns of the data grid view specifically in this class, visually, but it's locked. Yet for the LABEL on the form I CAN get to all the property settings... What am I missing?


  • How is a relative JMP (x86) implemented in an Assembler?

    - by Pindatjuh
    While building my assembler for the x86 platform I encountered some problems with encoding the JMP instruction:

        enc    inst       size in bytes
        EB cb  JMP rel8   2
        E9 cw  JMP rel16  4  (because of the 0x66 16-bit prefix)
        E9 cd  JMP rel32  5

    (from my favourite x86 instruction website, http://siyobik.info/index.php?module=x86&id=147). All are relative jumps, where the size of each encoding (operation + operand) is in the third column.

    Now my original (and thus flawed, because of this) design reserved the maximum (5 bytes) of space for each such instruction. The operand is not yet known, because it's a jump to a yet unknown location. So I've implemented a "rewrite" mechanism that rewrites the operands in the correct location in memory once the location of the jump target is known, and fills the rest with NOPs. This is a somewhat serious concern in tight loops.

    Now my problem is with the following situation:

        b: XXX
        c: JMP a
        e: XXX
           ...
           XXX
        d: JMP b
        a: XXX

    (where XXX is any instruction, depending on the to-be-assembled program). The problem is that I want the smallest possible encoding for a JMP instruction (and no NOP filling). I have to know the size of the instruction at c before I can calculate the relative distance between a and b for the operand at d. The same applies for the JMP at c: it needs to know the size of d before it can calculate the relative distance between e and a.

    How do existing assemblers implement this, or how would you implement this? This is what I am thinking, which solves the problem: first encode all the instructions between the JMP and its target to opcodes, and if this region contains a variable-sized opcode, use the maximum size, i.e. 5 for JMP. Then, since in some conditions the JMP is oversized (because it may fit in a smaller encoding), another pass will search for oversized JMPs, shrink them, and move all following instructions up; absolute branching instructions (i.e. external CALLs) are fixed up after this pass is completed. I wonder, perhaps this is an over-engineered solution, which is why I ask this question.
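
    For comparison, a toy sketch of the "branch relaxation" approach many assemblers use, going in the opposite direction: assume the short rel8 form for every jump, lay out addresses, grow any jump whose displacement does not fit, and repeat until nothing changes (this converges because sizes only ever grow; the instruction representation here is made up for the example):

        SHORT, NEAR = 2, 5   # sizes of jmp rel8 and jmp rel32 in bytes

        def relax(instrs, labels):
            # instrs: list of ("bytes", length) or ("jmp", target_label)
            # labels: maps a label to the index of the instruction it points at
            sizes = [SHORT if op == "jmp" else arg for op, arg in instrs]
            while True:
                offsets, pos = [], 0
                for s in sizes:
                    offsets.append(pos)
                    pos += s
                addr = dict((lab, offsets[idx]) for lab, idx in labels.items())

                changed = False
                for i, (op, arg) in enumerate(instrs):
                    if op == "jmp" and sizes[i] == SHORT:
                        rel = addr[arg] - (offsets[i] + SHORT)   # rel8 is relative to the end of the jmp
                        if not -128 <= rel <= 127:
                            sizes[i] = NEAR
                            changed = True
                if not changed:
                    return sizes

        # the jump over the 200-byte block is forced to the near form, the short hop stays rel8
        instrs = [("jmp", "end"), ("bytes", 200), ("jmp", "mid"), ("bytes", 10), ("bytes", 1)]
        labels = {"mid": 3, "end": 4}
        print(relax(instrs, labels))   # [5, 200, 2, 10, 1]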


  • 1 Oracle schema supporting a large volume of requests per day - is this safe?

    - by Hlex
    I'm a Java system designer. We have several large projects to deliver on a tight schedule; those projects are Java APIs without web pages. I have designed a general flow engine to support all of the projects. The idea uses one Oracle schema, with a general transaction table and other control/routing tables. They are all nearly complete.

    But the DBA team is concerned that it will be painful to maintain a very large volume of requests against one schema. One reason is that if there is a problem with some table, they must take the tablespace offline to fix it, and that is a problem because all projects would be affected.

    I have tried to convince them by splitting the data of each table into partitions by project_code and the "month number to delete". Example partitions:

        PROJ1_05  PROJ1_06  PROJ1_07
        PROJ2_05  PROJ2_06  PROJ2_07

    All transaction tables would store their data in their own partitions. So if there is a problem in any part of a tablespace, the DBA can take some partitions offline, and another project using the same table should still be able to provide service. The volume should be around 10 million records per day.

    Is this a good idea? If I must use one schema, what strategy should I follow? Do you have any comments?
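
    A rough sketch of the partitioning idea in Oracle DDL (table, column and tablespace names are illustrative): range-partition by month and list-subpartition by project code, putting each project's subpartitions into its own tablespace, so one project's storage can be taken offline without touching the others:

        CREATE TABLE general_transaction (
            project_code  VARCHAR2(10)    NOT NULL,
            txn_date      DATE            NOT NULL,
            payload       VARCHAR2(4000)
        )
        PARTITION BY RANGE (txn_date)
        SUBPARTITION BY LIST (project_code)
        SUBPARTITION TEMPLATE (
            SUBPARTITION proj1 VALUES ('PROJ1') TABLESPACE ts_proj1,
            SUBPARTITION proj2 VALUES ('PROJ2') TABLESPACE ts_proj2
        )
        (
            PARTITION m_2010_05 VALUES LESS THAN (TO_DATE('2010-06-01', 'YYYY-MM-DD')),
            PARTITION m_2010_06 VALUES LESS THAN (TO_DATE('2010-07-01', 'YYYY-MM-DD')),
            PARTITION m_2010_07 VALUES LESS THAN (TO_DATE('2010-08-01', 'YYYY-MM-DD'))
        );

    Dropping a whole month for one project then becomes an ALTER TABLE ... DROP SUBPARTITION instead of a bulk DELETE.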


  • "É" not getting converted to two bytes correctly.

    - by ChrisF
    Further to this question, I've got a supplementary problem. I've found a track with an "É" in the title. My code:

        var playList = new StreamWriter(playlist, false, Encoding.UTF8);

        private static void WriteUTF8(StreamWriter playList, string output)
        {
            byte[] byteArray = Encoding.UTF8.GetBytes(output);
            foreach (byte b in byteArray)
            {
                playList.Write(Convert.ToChar(b));
            }
        }

    converts this to the following bytes: 195 137, which is being output as à followed by a square (a character that can't be printed in the current font). I've exported the same file to a playlist in Media Monkey and it writes the "É" as "É" - which I'm assuming is correct (as KennyTM pointed out).

    My question is, how do I get the "‰" symbol output? Do I need to select a different font, and if so which one?

    UPDATE: People seem to be missing the point. I can get the "É" written to the file using:

        playList.WriteLine("É");

    That's not the problem. The problem is that Media Monkey requires the file to be in the following format:

        #EXTINFUTF8:140,Yann Tiersen - Comptine D'Un Autre Été: L'Après Midi
        #EXTINF:140,Yann Tiersen - Comptine D'Un Autre Été: L'Après Midi
        #UTF8:04-Comptine D'Un Autre Été- L'Après Midi.mp3
        04-Comptine D'Un Autre Été- L'Après Midi.mp3

    where all the "high-ASCII" characters (for want of a better term) are written out as a pair of characters.
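
    Not a definitive answer, but a sketch of two ways to get the byte-pair form without converting byte-by-byte through Convert.ToChar (the snag with the original loop is that the UTF-8 StreamWriter re-encodes each char, so the bytes on disk are no longer 195 137):

        using System.IO;
        using System.Text;

        // Option 1: write the UTF-8 bytes straight to the stream, with no second encoding pass.
        static void WriteUtf8Bytes(Stream playlistStream, string output)
        {
            byte[] bytes = Encoding.UTF8.GetBytes(output);
            playlistStream.Write(bytes, 0, bytes.Length);
        }

        // Option 2: open the writer with the Windows-1252 code page, in which byte 0x89
        // is the '‰' character, so "É" round-trips to the two bytes 0xC3 0x89 on disk.
        static void WriteAsWindows1252(string playlistPath, string output)
        {
            byte[] utf8 = Encoding.UTF8.GetBytes(output);
            string asPairs = Encoding.GetEncoding(1252).GetString(utf8);   // e.g. "É" -> "É"
            using (var writer = new StreamWriter(playlistPath, false, Encoding.GetEncoding(1252)))
            {
                writer.WriteLine(asPairs);
            }
        }

    Which of the two Media Monkey actually expects depends on whether it reads the playlist as UTF-8 or as an ANSI code page, so this is an assumption to verify against a file Media Monkey itself exported.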


  • Tomcat 4.1.31 - HTTPS not working intermittently | "Page Cannot be Displayed" problems

    - by cedar715
    We are facing this error intermittently: if we restart the server it works for some time, and then the problem starts again. We also have another load-balanced server with a similar configuration, and that one is working fine. The server is running on a Linux box. If we do a "ps -ef" it lists the Tomcat process.

    URL: https://xyz.abc.com:9234/axis/servlet/AxisServlet

    Following is the connector configuration in the server.xml file:

        <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
                   port="9234" minProcessors="5" maxProcessors="75"
                   enableLookups="true" acceptCount="100" debug="0"
                   scheme="https" secure="true"
                   useURIValidationHack="false" disableUploadTimeout="true">
          <Factory className="org.apache.coyote.tomcat4.CoyoteServerSocketFactory"
                   clientAuth="false" protocol="TLS" />
        </Connector>

    Is it a problem with our load balancer, which is forwarding most requests to this server? Is it in any way related to the "maxProcessors" or "acceptCount" attributes defined in the above configuration? Is it a problem with the port number? Does it have anything to do with the certificate? The certificate was generated using the Java keytool. (However, the other load-balanced server is also using the same certificate and working fine.) Please suggest how to resolve this issue. Thanks.


  • passing an array of structures (containing two mpz_t numbers) to a function

    - by jerome
    Hello, I'm working on a project where I use the mpz_t type from the GMP C library. I have some problems passing the address of an array of structures (containing mpz_ts) to a function. I will try to explain my problem with some code. So here is the structure and the function:

        struct mpz_t2 {
            mpz_t a;
            mpz_t b;
        };
        typedef struct mpz_t2 *mpz_t2;

        void petit_test(mpz_t2 *test[])
        {
            printf("entering petit test function\n");
            for (int i = 0; i < 4; i++) {
                gmp_printf("test[%d]->a = %Zd and test[%d]->b = %Zd\n", test[i]->a, test[i]->b);
            }
        }

        /* IN MAIN FUNCTION */
        mpz_t2 *test = malloc(4 * sizeof(mpz_t2 *));
        for (int i = 0; i < 4; i++) {
            mpz_t2_init(&test[i]);        // if I pass test[i] : compiler error
            mpz_set_ui(test[i].a, i);     // if test[i]->a : compiler error
            mpz_set_ui(test[i].b, i*10);  // same problem
            gmp_printf("%Zd\n", test[i].b); // prints the correct result
        }
        petit_test(test);

    The program prints the expected result (in main), but after entering the petit_test function it produces a segmentation fault. I would need to edit the mpz_t2 structure array inside petit_test. I tried some other ways of allocating and passing the array to the function, but I didn't manage to get this right. If someone has a solution to this problem, I would be very thankful! Regards, jérôme.
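
    Not from the original post, but a sketch of one way to make this work (assuming the goal is an array of four structs, each holding two mpz_t values): allocate the structs themselves rather than pointers, and let the function take the struct type directly:

        /* gcc test.c -lgmp */
        #include <stdio.h>
        #include <stdlib.h>
        #include <gmp.h>

        struct pair { mpz_t a; mpz_t b; };

        static void petit_test(struct pair *test, int n)
        {
            for (int i = 0; i < n; i++)
                gmp_printf("test[%d].a = %Zd and test[%d].b = %Zd\n", i, test[i].a, i, test[i].b);
        }

        int main(void)
        {
            struct pair *test = malloc(4 * sizeof *test);   /* sizeof the struct, not a pointer */
            for (int i = 0; i < 4; i++) {
                mpz_init(test[i].a);
                mpz_init(test[i].b);
                mpz_set_ui(test[i].a, i);
                mpz_set_ui(test[i].b, i * 10);
            }
            petit_test(test, 4);

            for (int i = 0; i < 4; i++) {   /* release the GMP storage */
                mpz_clear(test[i].a);
                mpz_clear(test[i].b);
            }
            free(test);
            return 0;
        }

    The original malloc(4 * sizeof(mpz_t2 *)) only reserves room for four pointers, while the loop treats the memory as four whole structures, which is one reason the program walks off the end of the allocation.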


  • Visual Studio 2008 project file does not load because of an unexpected encoding change.

    - by Xenan
    In our team we have a database project in Visual Studio 2008 which is under source control in Team Foundation Server. Every two weeks or so, after one co-worker checks in, the project file won't load on the other developers' machines. The error message is:

        The project file could not be loaded. Data at the root level is invalid. Line 1, position 1.

    When I look at the project file in Notepad++, the file looks like this:

        ??<NUL?NULxNULmNULlNUL NULvNULeNULrNULsNULiNULoNULnNUL ...

    and so on (you can see <?xml version in this), whereas a normal project file looks like:

        <?xml version="1.0" encoding="utf-16"?>
        ...

    So probably something is wrong with the encoding of the file. This is a problem for us because it turns out to be impossible to get the file encoding correct again. The 'solution' is to throw away the project file and get the last known working version from source control. According to the file, the encoding should be UTF-16; according to Notepad++, the corrupted file is actually UTF-8.

    My questions are: Why is Visual Studio messing up the encoding of the project file, apparently at random times and on random machines? What should we do to prevent this? When it has happened, is there a possibility to restore the current file in the correct encoding instead of pulling an older version from source control?

    As a last note: the problem is with one single project file; all other project files don't exhibit this problem.


  • Space-saving character encoding for japanese?

    - by Constantin
    In my opinion a common problem: character encoding in combination with a bitmap font. Most multi-language encodings have huge gaps between different character ranges and even a lot of unused code points. So if I want to use them I waste a lot of memory (not only for storing multi-byte text - I mean especially for the gaps in my bitmap font), and VRAM is usually really valuable...

    So the only reasonable thing seems to be using a custom mapping on my texture for, e.g., UTF-8 characters (so that no space is wasted). BUT: this effort seems to be the same as using my own proprietary character encoding (and therefore my own ordering of characters in my texture). In my specific case I have texture space for 4096 different characters and need characters to display Latin languages as well as Japanese (it's a mess with UTF-8, which only gives me the general CJK code pages to work with).

    Has anybody ever had a similar problem (I really wonder if not)? Is there already an established approach?

    Edit: The same problem is described here: http://www.tonypottier.info/Unicode_And_Japanese_Kanji/ but it doesn't provide a real solution for how to store these bitmap-font mappings space-efficiently for UTF-8 text. So any further help is welcome!
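
    A toy sketch of the "custom mapping" idea described above (assuming the glyph atlas has 4096 slots and that the full body of text the game will display is known up front): assign a slot only to codepoints that actually occur, so Latin characters and the needed kanji fit together with no wasted space; the stored text can stay in plain UTF-8 and the map is applied only at render time:

        # build a codepoint -> texture-slot map from the text the application actually uses
        def build_glyph_map(all_text, max_slots=4096):
            glyph_map = {}
            for ch in sorted(set(all_text)):
                if len(glyph_map) >= max_slots:
                    raise ValueError("more distinct characters than atlas slots")
                glyph_map[ch] = len(glyph_map)   # next free slot in the bitmap font
            return glyph_map

        mapping = build_glyph_map(u"Hello 世界")
        print(mapping)   # {' ': 0, 'H': 1, 'e': 2, 'l': 3, 'o': 4, '世': 5, '界': 6}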

