Search Results

Search found 22641 results on 906 pages for 'case'.

  • Why did this work with Visual C++, but not with gcc?

    - by Carlos Nunez
    I've been working on a senior project for the last several months now, and a major sticking point in our team's development process has been dealing with rifts between Visual-C++ and gcc. (Yes, I know we all should have had the same development environment.) Things are about finished up at this point, but I ran into a moderate bug just today that had me wondering whether Visual-C++ is easier on newbies (like me) by design. In one of my headers, there is a function that relies on strtok to chop up a string, do some comparisons and return a string with a similar format. It works a little something like the following: int main() { string a, b, c; //Do stuff with a and b. c = get_string(a,b); } string get_string(string a, string b) { const char * a_ch, b_ch; a_ch = strtok(a.c_str(),","); b_ch = strtok(b.c_str(),","); } strtok is infamous for being great at tokenizing, but equally great at destroying the original string to be tokenized. Thus, when I compiled this with gcc and tried to do anything with a or b, I got unexpected behavior, since the separator used was completely removed in the string. Here's an example in case I'm unclear; if I set a = "Jim,Bob,Mary" and b="Grace,Soo,Hyun", they would be defined as a="JimBobMary" and b="GraceSooHyun" instead of staying the same like I wanted. However, when I compiled this under Visual C++, I got back the original strings and the program executed fine. I tried dynamically allocating memory to the strings and copying them the "standard" way, but the only way that worked was using malloc() and free(), which I hear is discouraged in C++. While I'm curious about that, the real question I have is this: Why did the program work when compiled in VC++, but not with gcc? (This is one of many conflicts that I experienced while trying to make the code cross-platform.) Thanks in advance! -Carlos Nunez

  • Ibator didn't generate Oracle varchar2 field

    - by bugbug
    I have table APP_REQ_APPROVE_COMPARE with the following fields: "ID" NUMBER NOT NULL ENABLE, "TRACK_NO" VARCHAR2(20 BYTE) NOT NULL ENABLE, "REQ_DATE" DATE NOT NULL ENABLE, "OFFCODE" CHAR(6 BYTE) NOT NULL ENABLE, "COMPARE_CASE_ID" NUMBER NOT NULL ENABLE, "VEHICLE_NAME" VARCHAR2(100 BYTE), "ENGINE_NO" VARCHAR2(100 BYTE), "BODY_NO" VARCHAR2(100 BYTE), "HOLD_SHIP" NUMBER, "OWNERSHIP" VARCHAR2(200 BYTE), "RENT_NAME" VARCHAR2(200 BYTE), "CONTRACT" VARCHAR2(100 BYTE), "CONTRACT_NO" VARCHAR2(100 BYTE), "CONTRACT_DATE" DATE, "ISLAWBREAKERRENT" CHAR(1 BYTE) NOT NULL ENABLE, "MISTAKE_DETAIL" VARCHAR2(4000 BYTE), "COMPARE_REASON" VARCHAR2(4000 BYTE), "CREATE_BY" NUMBER NOT NULL ENABLE, "CREATE_ON" DATE DEFAULT SYSDATE NOT NULL ENABLE, "UPDATE_BY" NUMBER, "UPDATE_ON" DATE, When I generate a Java bean using Ibator, I didn't find trackNo, VehicalName, ... (all fields defined as varchar2). What is the problem in my case? Here is my Ibator configuration file: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE ibatorConfiguration PUBLIC "-//Apache Software Foundation//DTD Apache iBATIS Ibator Configuration 1.0//EN" "http://ibatis.apache.org/dtd/ibator-config_1_0.dtd"> <ibatorConfiguration> <classPathEntry location="/dos/connector/oracle_jdbc.jar"/> <ibatorContext id="autoPerson" defaultModelType="flat" targetRuntime="Ibatis2Java2"> <jdbcConnection connectionURL="jdbc:oracle:thin:@192.168.42.144:1521:orcl" driverClass="oracle.jdbc.driver.OracleDriver" userId="user" password="password"/> <javaModelGenerator targetPackage="com.ko.model" targetProject="FormConfig"> <property name="enableSubPackages" value="true"/> <property name="trimStrings" value="true"/> </javaModelGenerator> <sqlMapGenerator targetPackage="com.ko.map" targetProject="FormConfig"> <property name="enableSubPackages" value="true"/> </sqlMapGenerator> <daoGenerator targetPackage="com.ko.model.dao" type="SPRING" targetProject="FormConfig" implementationPackage="com.ko.model.dao.impl" > <property name="enableSubPackges" value="true"/> <property name="methodNameCalculator" value="extended"/> </daoGenerator> <table tableName="APP_REQ_APPROVE_COMPARE" domainObjectName="AppReqApproveCompare"/> </ibatorContext> </ibatorConfiguration>

  • EOL Special Char not matching

    - by Aurélien Ribon
    Hello, I am trying to find every "a -> b, c, d" pattern in an input string. The pattern I am using is the following: "^[ \t]*(\\w+)[ \t]*->[ \t]*(\\w+)((?:,[ \t]*\\w+)*)$" This pattern is a C# pattern, the "\t" refers to a tabulation (it's a single escaped literal, interpreted by the .NET String API), the "\w" refers to the well-known predefined regex class, double escaped to be interpreted as a "\w" by the .NET String API, and then as a "WORD CLASS" by the .NET Regex API. The input is: a -> b b -> c c -> d The function is: private void ParseAndBuildGraph(String input) { MatchCollection mc = Regex.Matches(input, "^[ \t]*(\\w+)[ \t]*->[ \t]*(\\w+)((?:,[ \t]*\\w+)*)$", RegexOptions.Multiline); foreach (Match m in mc) { Debug.WriteLine(m.Value); } } The output is: c -> d Actually, there is a problem with the line ending "$" special char. If I insert a "\r" before "$", it works, but I thought "$" would match any line termination (with the Multiline option), especially a \r\n in a Windows environment. Is it not the case?
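    A minimal, self-contained sketch of the usual fix - allowing an optional carriage return before the anchor - since in .NET "$" with RegexOptions.Multiline matches just before "\n" and does not treat "\r\n" as a unit; the pattern and input mirror the question:

        using System;
        using System.Text.RegularExpressions;

        class EolDemo
        {
            static void Main()
            {
                // Windows line endings: "$" in Multiline mode matches before "\n" only,
                // so the "\r" left at the end of each line must be allowed for explicitly.
                string input = "a -> b\r\nb -> c\r\nc -> d";
                string pattern = @"^[ \t]*(\w+)[ \t]*->[ \t]*(\w+)((?:,[ \t]*\w+)*)\r?$";

                foreach (Match m in Regex.Matches(input, pattern, RegexOptions.Multiline))
                {
                    Console.WriteLine(m.Value.TrimEnd('\r'));   // prints all three lines
                }
            }
        }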

  • Most efficient way to check for DBNull and then assign to a variable?

    - by ilitirit
    This question comes up occasionally but I haven't seen a satisfactory answer. A typical pattern is (row is a DataRow): if (row["value"] != DBNull.Value) { someObject.Member = row["value"]; } My first question is which is more efficient (I've flipped the condition): row["value"] == DBNull.Value; // Or row["value"] is DBNull; // Or row["value"].GetType() == typeof(DBNull) // Or... any suggestions? This indicates that .GetType() should be faster, but maybe the compiler knows a few tricks I don't? Second question, is it worth caching the value of row["value"] or does the compiler optimize the indexer away anyway? e.g. object valueHolder; if (DBNull.Value == (valueHolder = row["value"])) {} Disclaimers: row["value"] exists. I don't know the column index of the column (hence the column name lookup). I'm asking specifically about checking for DBNull and then assignment (not about premature optimization etc). Edit: I benchmarked a few scenarios (time in seconds, 10000000 trials): row["value"] == DBNull.Value: 00:00:01.5478995 row["value"] is DBNull: 00:00:01.6306578 row["value"].GetType() == typeof(DBNull): 00:00:02.0138757 Object.ReferenceEquals has the same performance as "==" The most interesting result? If you mismatch the name of the column by case (e.g. "Value" instead of "value"), it takes roughly ten times longer (for a string): row["Value"] == DBNull.Value: 00:00:12.2792374 The moral of the story seems to be that if you can't look up a column by its index, then ensure that the column name you feed to the indexer matches the DataColumn's name exactly. Caching the value also appears to be nearly twice as fast: No Caching: 00:00:03.0996622 With Caching: 00:00:01.5659920 So the most efficient method seems to be: object temp; string variable; if (DBNull.Value != (temp = row["value"])) { variable = temp.ToString(); } This was a good learning experience.
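    A short, self-contained sketch of the winning pattern from the benchmark - cache the indexer result once, compare against DBNull.Value, then assign; the in-memory DataTable is only there to give the snippet a row to test against:

        using System;
        using System.Data;

        class DbNullDemo
        {
            static void Main()
            {
                // Hypothetical table, just so there is a row with a value and a row with DBNull.
                var table = new DataTable();
                table.Columns.Add("value", typeof(string));
                table.Rows.Add("hello");
                table.Rows.Add(DBNull.Value);

                foreach (DataRow row in table.Rows)
                {
                    // Cache the indexer result so the column-name lookup happens only once.
                    object temp = row["value"];
                    string variable = temp == DBNull.Value ? null : (string)temp;
                    Console.WriteLine(variable ?? "<null>");
                }
            }
        }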

  • Javascript: Whitespace Characters being Removed in Chrome (but not Firefox)

    - by Matrym
    Why would the below eliminate the whitespace around matched keyword text when replacing it with an anchor link? Note, this error only occurs in Chrome, and not firefox. For complete context, the file is located at: http://seox.org/lbp/lb-core.js To view the code in action (no errors found yet), the demo page is at http://seox.org/test.html. Copy/Pasting the first paragraph into a rich text editor (ie: dreamweaver, or gmail with rich text editor turned on) will reveal the problem, with words bunched together. Pasting it into a plain text editor will not. // Find page text (not in links) -> doxdesk.com function findPlainTextExceptInLinks(element, substring, callback) { for (var childi= element.childNodes.length; childi-->0;) { var child= element.childNodes[childi]; if (child.nodeType===1) { if (child.tagName.toLowerCase()!=='a') findPlainTextExceptInLinks(child, substring, callback); } else if (child.nodeType===3) { var index= child.data.length; while (true) { index= child.data.lastIndexOf(substring, index); if (index===-1 || limit.indexOf(substring.toLowerCase()) !== -1) break; // don't match an alphanumeric char var dontMatch =/\w/; if(child.nodeValue.charAt(index - 1).match(dontMatch) || child.nodeValue.charAt(index+keyword.length).match(dontMatch)) break; // alert(child.nodeValue.charAt(index+keyword.length + 1)); callback.call(window, child, index) } } } } // Linkup function, call with various type cases (below) function linkup(node, index) { node.splitText(index+keyword.length); var a= document.createElement('a'); a.href= linkUrl; a.appendChild(node.splitText(index)); node.parentNode.insertBefore(a, node.nextSibling); limit.push(keyword.toLowerCase()); // Add the keyword to memory urlMemory.push(linkUrl); // Add the url to memory } // lower case (already applied) findPlainTextExceptInLinks(lbp.vrs.holder, keyword, linkup); Thanks in advance for your help. I'm nearly ready to launch the script, and will gladly comment in kudos to you for your assistance.

  • T-SQL Table Joins - Unique Situation

    - by Dimitri
    Hello Everyone. This is my first time encountering a case like this, and I don't quite know how to handle it. Situation: I have one table tblSettingsDefinition, with fields: ID, GroupID, Name, typeID, DefaultValue. Then I have tblSettingtypes with fields TypeID, Name. And I have a final table, tblUserSettings with fields SettingID, SettingDefinitionID, UserID, Value. The whole point of this is to have customizable settings. A setting can be defined for a Group or as a global setting (if GroupID is NULL). It will have a default value, but if a user modifies the setting, an entry is added to tblUserSettings that stores the new value. I want to have a query that grabs user settings by first looking at the tblUserSettings, and if it has records for the given user, grabs them; if not, retrieves the default settings. But the trick is that no matter whether the user has settings or not, I need to have fields from the other two tables retrieved to know the setting's Type, Name etc... (which are stored in those other tables). I'm writing a query something like this: SELECT * FROM tblSettingDefinition SD LEFT JOIN tblUserSettings US ON SD.SettingID = US.SettingDefinitionID JOIN tblSettingTypes ST ON SD.TypeID=ST.ID WHERE US.UserID=@UserID OR ((SD.GroupID IS NULL) OR (SD.GroupID=(SELECT GroupID FROM tblUser WHERE ID=@UserID))) but it retrieves settings for all users from tblUserSettings instead of just the ones that match the current @UserID. And if @UserID has no records in tblUserSettings, still, all user settings are retrieved instead of the defaults from tblSettingDefinition. Hope I made myself clear. Any help would be highly appreciated. Thank you.

  • Why don't file type filters work properly with nsIFilePicker on Mac OSX?

    - by Eric Strom
    I am running a chrome app in firefox (started with -app) with the following code to open a filepicker: var nsIFilePicker = Components.interfaces.nsIFilePicker; var fp = Components.classes["@mozilla.org/filepicker;1"] .createInstance(nsIFilePicker); fp.init(window, "Select Files", nsIFilePicker.modeOpenMultiple); fp.appendFilter("video", "*.mov; *.mpg; *.mpeg; *.avi; *.flv; *.m4v; *.mp4"); fp.appendFilter("all", "*.*"); var res = fp.show(); if (res == nsIFilePicker.returnCancel) return; var files = fp.files; var paths = []; while (files.hasMoreElements()) { var arg = files.getNext().QueryInterface( Components.interfaces.nsILocalFile ).path; paths.push(arg); } Everything seems to work fine on Windows, and the file picker itself works on OSX, but the dropdown menu to select between file types only displays in Windows. The first filter (video in this case) is in effect, but the dropdown to select the other type never shows. Is there something extra that is needed to get this working on OSX? I have tried the latest firefox (3.6) and an older one (3.0.13) and both don't show the file type dropdown on OSX.

  • Reformat SQLGeography polygons to JSON

    - by James
    I am building a web service that serves geographic boundary data in JSON format. The geographic data is stored in an SQL Server 2008 R2 database using the geography type in a table. I use [ColumnName].ToString() method to return the polygon data as text. Example output: POLYGON ((-6.1646509904325884 56.435153006374627, ... -6.1606079906751 56.4338050060666)) MULTIPOLYGON (((-6.1646509904325884 56.435153006374627 0 0, ... -6.1606079906751 56.4338050060666 0 0))) Geographic definitions can take the form of either an array of lat/long pairs defining a polygon or in the case of multiple definitions, an array or polygons (multipolygon). I have the following regex that converts the output to JSON objects contained in multi-dimensional arrays depending on the output. Regex latlngMatch = new Regex(@"(-?[0-9]{1}\.\d*)\s(\d{2}.\d*)(?:\s0\s0,?)?", RegexOptions.Compiled); private string ConvertPolysToJson(string polysIn) { return this.latlngMatch.Replace(polysIn.Remove(0, polysIn.IndexOf("(")) // remove POLYGON or MULTIPOLYGON .Replace("(", "[") // convert to JSON array syntax .Replace(")", "]"), // same as above "{lng:$1,lat:$2},"); // reformat lat/lng pairs to JSON objects } This is actually working pretty well and converts the DB output to JSON on the fly in response to an operation call. However I am no regex master and the calls to String.Replace() also seem inefficient to me. Does anyone have any suggestions/comments about performance of this?
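    One possible single-pass variant that uses a MatchEvaluator instead of the chained String.Replace calls; it is only a sketch, checked here against the sample WKT above rather than the full range of SQL Server output, and it keeps the unquoted lng/lat keys from the original:

        using System;
        using System.Text.RegularExpressions;

        class WktToJsonSketch
        {
            // One alternative per token: the leading type keyword, a coordinate pair
            // (optionally followed by " 0 0"), or a parenthesis.
            static readonly Regex WktToken = new Regex(
                @"^[A-Z]+\s*|(-?\d+(?:\.\d+)?)\s+(-?\d+(?:\.\d+)?)(?:\s+0\s+0)?|[()]",
                RegexOptions.Compiled);

            static string ConvertPolysToJson(string wkt)
            {
                return WktToken.Replace(wkt, m =>
                {
                    if (m.Value == "(") return "[";
                    if (m.Value == ")") return "]";
                    if (m.Groups[1].Success)
                        return "{lng:" + m.Groups[1].Value + ",lat:" + m.Groups[2].Value + "}";
                    return string.Empty;   // drops the POLYGON / MULTIPOLYGON keyword
                });
            }

            static void Main()
            {
                string wkt = "POLYGON ((-6.1646509904325884 56.435153006374627, -6.1606079906751 56.4338050060666))";
                Console.WriteLine(ConvertPolysToJson(wkt));
                // [[{lng:-6.1646509904325884,lat:56.435153006374627}, {lng:-6.1606079906751,lat:56.4338050060666}]]
            }
        }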

  • Converting non-delimited text into name/value pairs in Delphi

    - by robsoft
    I've got a text file that arrives at my application as many lines of the following form: <row amount="192.00" store="10" transaction_date="2009-10-22T12:08:49.640" comp_name="blah " comp_ref="C65551253E7A4589A54D7CCD468D8AFA" name="Accrington "/> and I'd like to turn this 'row' into a series of name/value pairs in a given TStringList (there could be dozens of these <row>s in the file, so eventually I will want to iterate through the file breaking each row into name/value pairs in turn). The problem I've got is that the data isn't obviously delimited (technically, I suppose it's space delimited). Now if it wasn't for the fact that some of the values contain leading or trailing spaces, I could probably make a few reasonable assumptions and code something to break a row up based on spaces. But as the values themselves may or may not contain spaces, I don't see an obvious way to do this. Delphi's TStringList.CommaText doesn't help, and I've tried playing around with Delimiter but I get caught-out by the spaces inside the values each time. Does anyone have a clever Delphi technique for turning the sample above into something resembling this? ; amount="192.00" store="10" transaction_date="2009-10-22T12:08:49.640" comp_name="blah " comp_ref="C65551253E7A4589A54D7CCD468D8AFA" name="Accrington " Unfortunately, as is usually the case with this kind of thing, I don't have any control over the format of the data to begin with - I can't go back and 'make' it comma delimited at source, for instance. Although I guess I could probably write some code to turn it into comma delimited - would rather find a nice way to work with what I have though. This would be in Delphi 2007, if it makes any difference.

  • In Java, how do I iterate through lines in a text file from back to front

    - by rogue780
    Basically I need to take a text file such as: Fred Bernie Henry and be able to read them from the file in the order of Henry Bernie Fred The actual file I'm reading from is 30MB and it would be a less than perfect solution to read the whole file, split it into an array, reverse the array and then go from there. It takes way too long. My specific goal is to find the first occurrence of a string (in this case it's "InitGame") and then return the position of the beginning of that line. I did something like this in Python before. My method was to seek to the end of the file - 1024, then read lines until I get to the end, then seek another 1024 from my previous starting point and, by using tell(), I would stop when I got to the previous starting point. So I would read those blocks backwards from the end of the file until I found the text I was looking for. So far, I'm having a heck of a time doing this in Java. Any help would be greatly appreciated and if you live near Baltimore it may even end up with you getting some fresh baked cookies. Thanks!

  • How does browser know when to prompt user to save password?

    - by Eric
    This is related to the question I asked here: http://stackoverflow.com/questions/2382329/how-can-i-get-browser-to-prompt-to-save-password This is the problem: I CAN'T get my browser to prompt me to save the password for the site I'm developing. (I'm talking about the bar that appears sometimes when you submit a form on Firefox, that says "Remember the password for yoursite.com? Yes / Not now / Never") This is super frustrating because this feature of Firefox (and most other modern browsers, which I hope work in a similar fashion) seems to be a mystery. It's like a magic trick the browser does, where it looks at your code, or what you submit, or something, and if it "looks" like a login form with a username (or email address) field and a password field, it offers to save. Except in this case, where it's not offering my users that option after they use my login form, and it's making me nuts. :-) (I checked my Firefox settings-- I have NOT told the browser "never" for this site. It should be prompting.) My question: exactly what the heuristics are that Firefox (or any other modern browser) uses to know when it should prompt the user to save? This shouldn't be too difficult to answer, since it's right there in the Mozilla source (I don't know where to look or else I'd try to dig it out myself). You'd think there would be a blog post or some other similar developer note from the Mozilla developers about this but I can't find that either. (* Note that if your answer to me has anything to do with cookies, encryption or anything else that is about how I'm storing the user's passwords in the database, you've probably misread my question. :-)

  • Plink SSH: '-m file' option not working

    - by Technext
    Hi, I am trying to use Plink for running commands on a remote server. Both the local and remote machines are Windows. Though I am able to connect to the remote machine using Plink, I am not able to use the '-m file' option. I tried the following three ways but to no avail: Try 1: plink.exe -ssh -pw mypwd gchhabra@machine -m file.txt Could not chdir to home directory /home/gchhabra: No such file or directory dir: not found file.txt only contains one command i.e., dir Try 2: plink.exe -ssh -pw mypwd gchhabra@machine dir Could not chdir to home directory /home/gchhabra: No such file or directory dir: not found Try 3: plink.exe -ssh -pw mypwd gchhabra@machine < file.txt In this case, I get the following output: Using username "gchhabra". ****USAGE WARNING**** This is a private computer system. This computer system, including all ..... including personal information, placed or sent over this system may be monitored. Use of this computer system, authorized or unauthorized, constitutes consent ... constitutes consent to monitoring for these purposes. dirCould not chdir to home directory /home/gchhabra: No such file or directory Microsoft Windows [Version x.x.xxx] (C) Copyright 1985-2003 Microsoft Corp. C:\Program Files\OpenSSH After I get the above prompt, it hangs. Can anyone please help me with this? Regards, Gaurav

  • Parsing multibyte string in PHP

    - by Petr Peller
    I would like to write a (HTML) parser based on a state machine, but I have doubts about how to actually read/use the input. I decided to load the whole input into one string and then work with it as with an array and hold its index as the current parsing position. There would be no problems with single-byte encoding, but in multi-byte encoding each value does not represent a character, but a byte of a character. Example: $mb_string = 'žšcr'; //4 multi-byte characters in UTF-8 for($i=0; $i < 4; $i++) { echo $mb_string[$i], PHP_EOL; } Outputs: L ž L A This means I cannot iterate through the string in a loop to check single characters, because I never know if I am in the middle of a character or not. So the questions are: How do I read a single character from a string in a multi-byte-safe, performance-friendly way? Is it a good idea to work with the string as if it were an array in this case? How would you read the input?

  • MyMessage<T> throws an exception when calling XmlSerializer

    - by Arthis
    I am very new to NServiceBus. I am using version 3.0.1, the latest one to date. And I wonder if my case hits a normal limitation of NSB that I am not aware of. I have an ASP.NET MVC application I am trying to set up, and in my global.asax I have the following: var configure = Configure.WithWeb() .DefaultBuilder() .ForMvc() .XmlSerializer(); But I have an error with the XmlSerializer when dealing with one of my objects: [Serializable] public class MyMessage<T> : IMessage { public T myobject { get; set; } } I pass through: XmlSerializer() instance.Initialize(types); this.InitType(type, moduleBuilder); this.InitType(info2.PropertyType, moduleBuilder); and then, when dealing with T, string typeName = GetTypeName(t); typeName is null and the following instruction: if (!nameToType.ContainsKey(typeName)) fails with an error: null value not allowed. Is this a limitation of NServiceBus, or am I messing something up?
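    The workaround usually reported for this - the stock XML serializer in the NServiceBus 2.x/3.x line is generally described as not supporting generic message types - is a concrete, non-generic message per payload; a minimal sketch, with MyObject standing in for whatever T was:

        using NServiceBus;

        // Concrete message type instead of MyMessage<T>; "MyObject" is a placeholder
        // for the payload type that was previously supplied as T.
        public class MyObjectMessage : IMessage
        {
            public MyObject Payload { get; set; }
        }

        public class MyObject
        {
            public string Name { get; set; }
        }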

  • Thread Message Loop Hangs in Delphi

    - by erikjw
    Hello all. I have a simple Delphi program that I'm working on, in which I am attempting to use threading to separate the functionality of the program from its GUI, and to keep the GUI responsive during more lengthy tasks, etc. Basically, I have a 'controller' TThread, and a 'view' TForm. The view knows the controller's handle, which it uses to send the controller messages via PostThreadMessage. I have had no problem in the past using this sort of model for forms which are not the main form, but for some reason, when I attempt to use this model for the main form, the message loop of the thread just quits. Here is my code for the thread's message loop: procedure TController.Execute; var Msg : TMsg; begin while not Terminated do begin if (Integer(GetMessage(Msg, hwnd(0), 0, 0)) = -1) then begin Synchronize(Terminate); end; TranslateMessage(Msg); DispatchMessage(Msg); case Msg.message of // ...call different methods based on message end; end; To set up the controller, I do this: Controller := TController.Create(true); // Create suspended Controller.FreeOnTerminate := True; Controller.Resume; For processing the main form's messages, I have tried using both Application.Run and the following loop (immediately after Controller.Resume): while not Application.Terminated do begin Application.ProcessMessages; end; I'm stuck here - any help would be greatly appreciated.

  • Troubleshoot JavaScript Function in IE

    - by CreativeNotice
    So this function works fine in Gecko and WebKit browsers, but not IE7. I've busted my brain trying to spot the issue. Anything stick out for you? The basic premise is you pass in a data object (in this case a response from jQuery's $.getJSON); we check for a response code, set the notification's class, append a layer and show it to the user. Then reverse the process after a time limit. function userNotice(data){ // change class based on error code returned var myClass = ''; if(data.code == 200){ myClass='success'; } else if(data.code == 400){ myClass='error'; } else{ myClass='notice'; } // create message html, add to DOM, FadeIn var myNotice = '<div id="notice" class="ajaxMsg '+myClass+'">'+data.msg+'</div>'; $("body").append(myNotice); $("#notice").fadeIn('fast'); // fadeout and remove from DOM after delay var t = setTimeout(function(){ $("#notice").fadeOut('slow',function(){ $(this).remove(); }); },5000); }

  • auto m3u creation

    - by newbie69
    Hi, I am looking for a solution to automatically create .m3u playlists for each music folder in my sdcard so that the music player can play music by folders. I had written a simple VB.Net app in the past that does exactly the above but apparently, it has to be run from Windows. Since I have no Java nor Android developing experience I found it quite hard to try to write a similar app that can be run directly from the phone. In a few words, the app does the following: 1) Searches the SD and lists all folders that contain 2 or more .mp3 files (just for user verification) 2) Creates in every listed folder above, a .m3u file that simply lists line-by-line all the mp3 files that exist in the specific folder. Is there such an app or could someone spare some time and give me some rough instructions on how to create it in Eclipse 3.5.2 environment? (device used: Motorola Droid/Milestone, Android 2.1) I don't care about any graphics or complex UI, just a script to execute the above procedure that would give every "playlist-supporting" music player in Android, the precious ability to play music by folders. I know it is too much to ask but just in case! Thanx in advance.

  • .NET 4: Process.Start using credentials returns empty output

    - by alexey
    I run an external program from ASP.NET: var process = new Process(); var startInfo = process.StartInfo; startInfo.FileName = filePath; startInfo.Arguments = arguments; startInfo.UseShellExecute = false; startInfo.RedirectStandardOutput = true; startInfo.RedirectStandardError = true; process.Start(); process.WaitForExit(); Console.Write("Output: {0}", process.StandardOutput.ReadToEnd()); Console.Write("Error Output: {0}", process.StandardError.ReadToEnd()); Everything works fine with this code: the external program is executed and process.StandardOutput.ReadToEnd() returns the correct output. But after I add these two lines before process.Start() (to run the program in the context of another user account): startInfo.UserName = userName; startInfo.Password = securePassword; The program is not executed and process.StandardOutput.ReadToEnd() returns an empty string. No exceptions are thrown. userName and securePassword are correct (in case of incorrect credentials an exception is thrown). How to run the program in the context of another user account?
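    A sketch of the adjustments most often suggested for this symptom: give the target account a working directory it can reach, load its profile, and read the redirected streams before WaitForExit so the pipe buffer cannot deadlock. The executable path and credentials are placeholders, and none of this is a guaranteed fix:

        using System;
        using System.Diagnostics;

        class RunAsDemo
        {
            static void Main()
            {
                var startInfo = new ProcessStartInfo
                {
                    FileName = @"C:\Windows\System32\whoami.exe",   // placeholder target
                    UseShellExecute = false,
                    RedirectStandardOutput = true,
                    RedirectStandardError = true,
                    UserName = "someUser",                          // placeholder credentials
                    Password = MakeSecure("somePassword"),
                    // Two settings that are often missing when the output comes back empty:
                    WorkingDirectory = @"C:\Windows\System32",      // a directory the target account can access
                    LoadUserProfile = true
                };

                using (var process = Process.Start(startInfo))
                {
                    // Read the streams before WaitForExit to avoid deadlocking on a full pipe buffer.
                    string output = process.StandardOutput.ReadToEnd();
                    string error = process.StandardError.ReadToEnd();
                    process.WaitForExit();

                    Console.Write("Output: {0}", output);
                    Console.Write("Error Output: {0}", error);
                }
            }

            static System.Security.SecureString MakeSecure(string plain)
            {
                var secure = new System.Security.SecureString();
                foreach (char c in plain) secure.AppendChar(c);
                return secure;
            }
        }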

  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices as specified by Microsoft for Access development is splitting an Access application into two parts: the Front End that holds all the objects except tables, and the Back End that holds the tables. The MSDN page there links to the article Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability that describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time there are any changes made to the front end they need to be deployed to every user machine. My question is: Assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving this on the server as well next to the Back End copy? I can see the performance issues here, but are there any dangers here like possible corruptions etc? Thank you EDIT Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about what are the dangers if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case? It is a follow-up to my previous question From SQL Server to MS Access 2007

  • Why aren't Google API clients built on top of Apache's Abdera project?

    - by lisak
    Hey, Could anybody please explain that to me? As far as I can see, the developers of java's google api client library are reinventing the wheel. It's like writing a new JDK for a Java project. I'm aware of the fact that google data protocol is a little specific re atom publishing, but if one needs to use some of the fancy extensions and features that Apache Abdera project offers for this protocol, it is better not to use google api client library and implement the client from scratch with Abdera... And I'm sure that in a lot of cases its features such as Abdera's JCR adapter would become very handy for google docs, google translator toolkit and others. Now it's great that there is a google api client library to be used for google docs, but what am I going to do with the documents? I believe that in more than a half cases there is also a repository or database on the other side. And in that case, abdera is needed, not the simple google api clients that are only marshalling/unmarshalling the feeds... In fact, there is something to persist in all of the google APIs. It would make sense, if google decided to invest the effort into Abdera enhancement... This doesn't... Also for the question to be more specific: How are you developing google api clients, that need entry persistence (JCR for instance) ? What would be the best way to integrate a google api client library with Apache Abdera ?

  • .NET RegEx - First N chars of First M lines

    - by George
    Hello! I want 4 general RegEx expressions for the following 4 basic cases: Up to A chars starting after B chars from start of line on up to C lines starting after D lines from start of file Up to A chars starting after B chars from start of line on up to C lines occurring before D lines from end of file Up to A chars starting before B chars from end of line on up to C lines starting after D lines from start of file Up to A chars starting before B chars from end of line on up to C lines starting before D lines from end of file These would allow to select arbitrary text blocks anywhere in the file. So far I have managed to come up with cases that only work for lines and chars separately: (?<=(?m:^[^\r]{N}))[^\r]{1,M} = UP TO M chars OF EVERY LINE, AFTER FIRST N chars [^\r]{1,M}(?=(?m:.{N}\r$)) = UP TO M chars OF EVERY LINE, BEFORE LAST N chars The above 2 expressions are for chars, and they return MANY matches (one for each line). (?<=(\A([^\r]*\r\n){N}))(?m:\n*[^\r]*\r$){1,M} = UP TO M lines AFTER FIRST N lines (((?=\r?)\n[^\r]*\r)|((?=\r?)\n[^\r]+\r?)){1,M}(?=((\n[^\r]*\r)|(\n[^\r]+\r?)){N}\Z) = UP TO M lines BEFORE LAST N lines from end These 2 expressions are equivalents for the lines, but they always return just ONE match. The task is to combine these expressions to allow for scenarios 1-4. Anyone can help? Note that the case in the title of the question, is just a subclass of scenario #1, where both B = 0 and D = 0. EXAMPLE: SOURCE: line1 blah 1 line2 blah 2 line3 blah 3 line4 blah 4 line5 blah 5 line6 blah 6 DESIRED RESULT: Characters 3-6 of lines 3-5: A total of 3 matches: <match>ne3 </match> <match>ne4 </match> <match>ne5 </match>

  • Convert UCS-2 characters to UTF-8 Using C#

    - by quanticle
    I'm pulling some internationalized text from a MS SQL Server 2005 database. As per the defaults for that DB, the characters are stored as UCS-2. However, I need to output the data in UTF-8 format, as I'm sending it out over the web. Currently, I have the following code to convert: SqlString dbString = resultReader.GetSqlString(0); byte[] dbBytes = dbString.GetUnicodeBytes(); byte[] utf8Bytes = System.Text.Encoding.Convert(System.Text.Encoding.Unicode, System.Text.Encoding.UTF8, dbBytes); System.Text.UTF8Encoding encoder = new System.Text.UTF8Encoding(); string outputString = encoder.GetString(utf8Bytes); However, when I examine the output in the browser, it appears to be garbage, no matter what I set the encoding to. What am I missing? EDIT: In response to the answers below, the reason I thought I had to perform a conversion is because I can output literal multibyte strings just fine. For example: OutputControl.Text = "????????????????????????????????????????????????????????????????"; works. Here, OutputControl is an ASP.Net Literal. However, OutputControl.Text = outputString; //Output from above snippet results in mangled output as described above. My hypothesis was that the database's output was somehow getting mangled by ASP.Net. If that's not the case, then what are some other possibilities?
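    A small self-contained check (the sample text is arbitrary) showing that the UTF-16 to UTF-8 round trip above is lossless; if it prints True, the mangled browser output is coming from the response's declared encoding rather than from this conversion:

        using System;
        using System.Text;

        class RoundTripDemo
        {
            static void Main()
            {
                // .NET strings are already UTF-16, i.e. the same code points the UCS-2 column
                // holds, so converting to UTF-8 bytes and decoding again loses nothing.
                string original = "Καλημέρα, 你好, здравствуйте";

                byte[] utf16Bytes = Encoding.Unicode.GetBytes(original);
                byte[] utf8Bytes = Encoding.Convert(Encoding.Unicode, Encoding.UTF8, utf16Bytes);
                string roundTripped = Encoding.UTF8.GetString(utf8Bytes);

                Console.WriteLine(original == roundTripped);   // True
            }
        }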

  • Use external datasource with NUnit's TestCaseAttribute

    - by Hamman359
    Is it possible to get the values for a TestCaseAttribute from an external data source such as an Excel spreadsheet, CSV file or database? i.e. have a .csv file with 1 row of data per test case and pass that data to NUnit one at a time. Here's the specific situation that I'd like to use this for. I'm currently merging some features from one system into another. This is pretty much just a copy and paste process from the old system into the new one. Unfortunately, the code being moved not only does not have any tests, but is not written in a testable manner (i.e. tightly coupled with the database and other code.) Taking the time to make the code testable isn't really possible since it's a big mess, I'm on a tight schedule and the entire feature is scheduled to be re-written from the ground up in the next 6-9 months. However, since I don't like the idea of not having any tests around the code, I'm going to create some simple Selenium tests using WebDriver to test the page through the UI. While this is not ideal, it's better than nothing. The page in question has about 10 input values and about 20 values that I need to assert against after the calculations are completed, with about 30 valid combinations of values that I'd like to test. I already have the data in a spreadsheet so it'd be nice to simply be able to pull that out rather than having to re-type it all in Visual Studio.
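    One commonly used route is NUnit's TestCaseSource - TestCase arguments themselves must be compile-time constants - pointing at a static member that reads the CSV and yields one TestCaseData per row; the file name, column layout and page-calculation helper below are hypothetical:

        using System.Collections.Generic;
        using System.IO;
        using NUnit.Framework;

        [TestFixture]
        public class PageCalculationTests
        {
            // One TestCaseData per CSV row: first two columns are inputs, third is the expected value.
            private static IEnumerable<TestCaseData> CsvCases()
            {
                foreach (string line in File.ReadLines(@"TestData\calculations.csv"))
                {
                    string[] fields = line.Split(',');
                    yield return new TestCaseData(fields[0], fields[1]).Returns(fields[2]);
                }
            }

            [Test, TestCaseSource("CsvCases")]
            public string CalculatesExpectedValue(string inputA, string inputB)
            {
                // Placeholder for the Selenium/WebDriver steps that drive the page.
                return RunPageCalculation(inputA, inputB);
            }

            private string RunPageCalculation(string a, string b)
            {
                return a + b;   // hypothetical system under test
            }
        }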

  • PostgreSQL insert on primary key failing with contention, even at serializable level

    - by Steven Schlansker
    I'm trying to insert or update data in a PostgreSQL db. The simplest case is a key-value pairing (the actual data is more complicated, but this is the smallest clear example) When you set a value, I'd like it to insert if the key is not there, otherwise update. Sadly Postgres does not have an insert or update statement, so I have to emulate it myself. I've been working with the idea of basically SELECTing whether the key exists, and then running the appropriate INSERT or UPDATE. Now clearly this needs to be be in a transaction or all manner of bad things could happen. However, this is not working exactly how I'd like it to - I understand that there are limitations to serializable transactions, but I'm not sure how to work around this one. Here's the situation - ab: => set transaction isolation level serializable; a: => select count(1) from table where id=1; --> 0 b: => select count(1) from table where id=1; --> 0 a: => insert into table values(1); --> 1 b: => insert into table values(1); --> ERROR: duplicate key value violates unique constraint "serial_test_pkey" Now I would expect it to throw the usual "couldn't commit due to concurrent update" but I'm guessing since the inserts are different "rows" this does not happen. Is there an easy way to work around this?
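    For what it's worth, the usual application-side emulation is update-first-then-insert, retrying if a concurrent insert slips in between the two statements (later PostgreSQL releases, 9.5 and up, added INSERT ... ON CONFLICT, which makes the dance unnecessary); a sketch against the plain ADO.NET interfaces, with the kv table and columns taken from the simplified key-value example:

        using System;
        using System.Data;

        static class Upsert
        {
            // Try UPDATE first, INSERT if nothing was updated, and retry if a concurrent
            // insert sneaks in between the two statements. Runs in autocommit mode;
            // inside an explicit transaction you would need a SAVEPOINT before the INSERT.
            public static void SetValue(IDbConnection connection, int id, string value)
            {
                for (int attempt = 0; attempt < 3; attempt++)
                {
                    using (IDbCommand update = connection.CreateCommand())
                    {
                        update.CommandText = "UPDATE kv SET value = @value WHERE id = @id";
                        AddParam(update, "@id", id);
                        AddParam(update, "@value", value);
                        if (update.ExecuteNonQuery() > 0) return;   // row existed, done
                    }

                    try
                    {
                        using (IDbCommand insert = connection.CreateCommand())
                        {
                            insert.CommandText = "INSERT INTO kv (id, value) VALUES (@id, @value)";
                            AddParam(insert, "@id", id);
                            AddParam(insert, "@value", value);
                            insert.ExecuteNonQuery();
                            return;
                        }
                    }
                    catch (Exception)
                    {
                        // Most likely a unique-key violation from a concurrent insert;
                        // loop and let the UPDATE pick the row up on the next attempt.
                    }
                }
                throw new InvalidOperationException("Upsert did not settle after 3 attempts.");
            }

            private static void AddParam(IDbCommand command, string name, object value)
            {
                IDbDataParameter p = command.CreateParameter();
                p.ParameterName = name;
                p.Value = value;
                command.Parameters.Add(p);
            }
        }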

  • Is ASP.NET MVC really MVC? Or how to separate model from controller?

    - by Andrey
    Hi all, This question is a bit rhetorical. At some point I got the feeling that ASP.NET MVC is not that authentic an implementation of the MVC pattern. Or maybe I didn't understand it. Consider the following domain: an electric bulb, a switch and a motion detector. They are connected together, and when you enter the room the motion detector switches on the bulb. If I want to represent them as MVC: the switch is the model, because it holds the state and contains logic; the bulb is the view, because it presents the state of the model to a human; the motion detector is the controller, because it converts user actions to generic model commands. The switch has one private field (On/Off) as a state and two methods (PressOn, PressOff). If you call PressOn when it is Off it goes to On; if you call it again the state doesn't change. The bulb can be replaced with a buzzer, the motion detector with a timer or a button, but the model still represents the same logic. The system will eventually have the same behavior. This is how I understand classical MVC decomposition, please correct me if I am wrong. Now let's decompose it in the ASP.NET MVC way: the bulb is still a view; the controller will be the switch + motion detector; the model is some object that will just pass state to the bulb. So the logic that defines behavior moves to the controller. Question 1: Is my understanding of MVC and ASP.NET MVC correct? Question 2: If yes, do you agree that ASP.NET MVC is not a 100% accurate implementation? And back to life. The final question is how to separate model from controller in the case of ASP.NET MVC. There can be two extremes. The controller does basic stuff and calls the model to do all the logic. The other is the controller does all the logic and the model is just something like a class with properties that is mapped to the DB. Question 3: Where should I draw the line between these extremes? How to balance? Thanks, Andrey
