Search Results

Search found 27142 results on 1086 pages for 'control structure'.


  • spring - constructor injection and overriding parent definition of nested bean

    - by mdma
    I've read the Spring 3 reference on inheriting bean definitions, but I'm confused about what is and isn't possible. For example, here is a bean that takes a collaborator bean configured with the value 12:

        <bean name="beanService12" class="SomeService">
            <constructor-arg index="0">
                <bean name="beanBaseNested" class="SomeCollaborator">
                    <constructor-arg index="0" value="12"/>
                </bean>
            </constructor-arg>
        </bean>

    I'd then like to be able to create similar beans, with slightly differently configured collaborators. Can I do something like this?

        <bean name="beanService13" parent="beanService12">
            <constructor-arg index="0">
                <bean>
                    <constructor-arg index="0" value="13"/>
                </bean>
            </constructor-arg>
        </bean>

    I'm not sure this is possible and, even if it were, it feels a bit clunky. Is there a nicer way to override small parts of a large nested bean definition? It seems the child bean has to know quite a lot about the parent, e.g. the constructor index. I'd prefer not to change the structure - the parent beans use collaborators to perform their function - but I can add properties and use property injection if that helps. This is a repeated pattern; would creating a custom schema help? Thanks for any advice!
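
    A minimal sketch of one common workaround (the extracted bean names below are illustrative, not from the original post): pull the nested collaborator out into a named top-level definition, derive per-value children from it, and have each service reference its collaborator by name, so each variant only overrides the one value it cares about:

        <bean name="collaboratorBase" class="SomeCollaborator" abstract="true"/>

        <bean name="collaborator12" parent="collaboratorBase">
            <constructor-arg index="0" value="12"/>
        </bean>

        <bean name="collaborator13" parent="collaboratorBase">
            <constructor-arg index="0" value="13"/>
        </bean>

        <bean name="beanService12" class="SomeService">
            <constructor-arg index="0" ref="collaborator12"/>
        </bean>

    Child definitions inherit the parent's constructor arguments and can override them selectively, which avoids repeating the full nested structure in every variant.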

    Read the article

  • Get the current array key in a multi dimensional array

    - by johlton
    Hi *, I have a session array $_SESSION['cart'] with some items in it. The structure is like this (via print_r):

        Array (
            [2-1] => Array ( [color] => 7 [articlenumber] => WRG70 10 [quantity] => 1 [price] => 17.50 )
            [3-8] => Array ( [color] => 2 [articlenumber] => QRG50 02 [quantity] => 1 [price] => 13.50 )
        )

    Looping over the values for display is fine...

        foreach ($_SESSION['cart'] as $item_array) {
            foreach ($item_array as $item => $value) {
                echo $value . ' | ';
            }
        }

    ...since it results in something like this:

        7 | WRG70 10 | 1 | 17.50 | 2 | QRG50 02 | 1 | 13.50 |

    But now: how can I output the matching key (e.g. '2-1') as well? I tried some array functions like key() and current() but couldn't get them to work (one of these days). Any quick hint on this? Thanks a lot and best from Berlin, Fabian
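
    A minimal sketch of the usual fix: foreach can bind each element's key alongside its value, so no extra array functions are needed:

        <?php
        // The outer key ('2-1', '3-8', ...) is bound to $key on each pass.
        foreach ($_SESSION['cart'] as $key => $item_array) {
            echo $key . ': ';
            foreach ($item_array as $item => $value) {
                echo $value . ' | ';
            }
        }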

    Read the article

  • XML file creation using XDocument in C#

    - by Pramodh
    I have a list (List<string>) "sampleList" which contains Data1, Data2, Data3... How do I create an XML file using XDocument by iterating over the items in the list in C#? The file structure is like this:

        <file>
            <name filename="sample"/>
            <date modified=" "/>
            <info>
                <data value="Data1"/>
                <data value="Data2"/>
                <data value="Data3"/>
            </info>
        </file>

    Right now I'm using XmlDocument to do this. Example:

        List<string> lst;
        XmlDocument XD = new XmlDocument();
        XmlElement root = XD.CreateElement("file");
        XmlElement nm = XD.CreateElement("name");
        nm.SetAttribute("filename", "Sample");
        root.AppendChild(nm);
        XmlElement date = XD.CreateElement("date");
        date.SetAttribute("modified", DateTime.Now.ToString());
        root.AppendChild(date);
        XmlElement info = XD.CreateElement("info");
        for (int i = 0; i < lst.Count; i++)
        {
            XmlElement da = XD.CreateElement("data");
            da.SetAttribute("value", lst[i]);
            info.AppendChild(da);
        }
        root.AppendChild(info);
        XD.AppendChild(root);
        XD.Save("Sample.xml");

    Please help me to do this.
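
    A minimal sketch of the equivalent with XDocument (LINQ to XML), assuming sampleList is the populated list from the question:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        class Program
        {
            static void Main()
            {
                var sampleList = new List<string> { "Data1", "Data2", "Data3" };

                // The tree is built declaratively; the Select projection
                // turns each list item into a <data value="..."/> element.
                var doc = new XDocument(
                    new XElement("file",
                        new XElement("name", new XAttribute("filename", "sample")),
                        new XElement("date", new XAttribute("modified", DateTime.Now)),
                        new XElement("info",
                            sampleList.Select(s =>
                                new XElement("data", new XAttribute("value", s))))));

                doc.Save("Sample.xml");
            }
        }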

    Read the article

  • Reading numpy arrays outside of Python

    - by Abiel
    In a recent question I asked about the fastest way to convert a large numpy array to a delimited string. My reason for asking was that I wanted to take that plain text string and transmit it (over HTTP, for instance) to clients written in other programming languages. A delimited string of numbers is obviously something that any client program can work with easily. However, it was suggested that because string conversion is slow, it would be faster on the Python side to base64-encode the array and send it as binary. This is indeed faster. My questions now are: (1) how can I make sure my encoded numpy array will travel well to clients on different operating systems and different hardware, and (2) how do I decode the binary data on the client side? For (1), my inclination is to do something like the following:

        import numpy as np
        import base64
        x = np.arange(100, dtype=np.float64)
        base64.b64encode(x.tostring())

    Is there anything else I need to do? For (2), I would be happy to have an example in any programming language, where the goal is to take the numpy array of floats and turn them into a similar native data structure. Assume we have already done the base64 decoding and have a byte array, and that we also know the numpy dtype, dimensions, and any other metadata which will be needed. Thanks.
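
    A minimal sketch of the Python side of the round trip, with the byte order pinned explicitly ('<f8' = little-endian 64-bit float) so the payload means the same thing regardless of the sender's hardware:

        import base64
        import numpy as np

        x = np.arange(100, dtype=np.float64)

        # Force a fixed byte order before encoding so heterogeneous
        # clients all agree on the wire format.
        payload = base64.b64encode(x.astype('<f8').tostring())

        # Decoding (shown in Python; a client in another language would
        # b64-decode and reinterpret the bytes as little-endian doubles).
        y = np.frombuffer(base64.b64decode(payload), dtype='<f8')
        assert (x == y).all()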

    Read the article

  • JSF 2.0 Dynamic Views

    - by Robe Eleckers
    Hello, I'm working on a web project which uses JSF 2.0, PrimeFaces and PrettyFaces as its main frameworks / libraries. The pages all have the following common structure: header, content, footer.

    Header: the header always contains the same menu. This menu is a custom component which generates a recursive HTML <ul><li> list containing <a href="url"> HTML links, all rendered with a custom renderer. A link looks like 'domain.com/website/datatable.xhtml?ref=2', where ref=2 is used to load the correct content from the database. I use PrettyFaces to store this request value in a backing bean. Question 1: is it OK to render the <a href> links myself, or would it be better to add an HtmlCommandLink from my UIComponent and render that in encodeBegin/End? Question 2: I think passing variables like this is not really the JSF 2.0 style; how can I do this in a better way?

    Content: the content contains dynamic data. It can be a (PrimeFaces) datatable built with dynamic data from the database. It can also be a text page, also loaded from the database. Or a series of graphs. You get the point: it's dynamic. The content is based on the link pressed in the header menu. If the content is of type datatable, then I pass the ref=2 variable to a DataTableBean (via PrettyFaces), which then loads the correct datatable from the database. If the content is of type chart, I put it on the ChartBean. Question 3: is this a normal setup? Ideally I would like to update my content via Ajax. I hope it's clear :)
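
    A minimal sketch of the more idiomatic JSF 2.0 answer to question 2 (bean and method names here are illustrative): declare the request value as a view parameter so JSF binds and converts it before the view renders, instead of reading the query string by hand:

        <!-- datatable.xhtml -->
        <f:metadata>
            <!-- Binds ?ref=2 into the backing bean automatically. -->
            <f:viewParam name="ref" value="#{contentBean.ref}"/>
            <!-- Load the matching content once the parameter is set. -->
            <f:event type="preRenderView" listener="#{contentBean.loadContent}"/>
        </f:metadata>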

    Read the article

  • asynchronous call doesn't return JSON

    - by Rebecca
    I am running WAMP on an XP box. I am fairly new to web programming - this is for a student project - and have run out of avenues to try to solve this problem. The problem: we have client-side JavaScript code that uses GDownloadUrl (from the Google API) to wrap XMLHttpRequest calls to a PHP server-side program that accesses our database. In my callback function, the result of this call is always " ". However, if I use an alert to display the http:// call, with the arguments, and cut and paste that into my browser, the JSON I expected is displayed. I zipped my directory containing all the files and tried it out on another team member's computer, and they were able to get the JSON in the callback function. Note this is exactly the same code and structure I was using; he just unzipped and ran. So now I'm thinking this is something about Firefox or WAMP? Could this be a config problem? I'm running WAMP Server 2.0 and Firefox 3.5.8. I have no problems with synchronous PHP, or with reading in files asynchronously. Any help would be greatly appreciated. Rebecca

    Read the article

  • How to set up single array or dictionary for use in multiple datasources?

    - by Roman
    I have multiple TableView data sources that need to display lists of objects from the same pool, depending on a certain property. E.g. if object.flag1 is set, it will show up in TableView1; if object.flag2 is set, it will show up in TableView2. The obvious way would be to have separate arrays for each TableView, but the same object may appear in different arrays. I also need to update objects very often and access all objects through the same array. How do I set up a single dictionary or array to hold all objects in one structure? To put it another way: when a table view or the selection changes, the application needs to redraw the TableViews with the new data. The application has to access the pool of objects and search through them with an iterator, accessing each object and its properties. I think that this is an expensive operation and want to avoid it. Perhaps I could make the global pool of objects a dictionary and expose object properties as dictionary fields; then, instead of iterating over the global pool of objects, I could query the global pool dictionary in the manner of a database, selecting objects whose fields match particular criteria. Does anyone know an example of doing that?
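
    A minimal sketch of one common Cocoa approach (property and variable names here are illustrative, assuming the model objects are key-value-coding-compliant): keep one master collection and derive each table's rows with a predicate, instead of maintaining parallel arrays by hand:

        // One shared pool of model objects.
        NSArray *pool = [objectPool allObjects];

        // Each table's data source filters the same pool by its flag;
        // no object is ever duplicated or stored twice.
        NSArray *table1Rows = [pool filteredArrayUsingPredicate:
            [NSPredicate predicateWithFormat:@"flag1 == YES"]];
        NSArray *table2Rows = [pool filteredArrayUsingPredicate:
            [NSPredicate predicateWithFormat:@"flag2 == YES"]];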

    Read the article

  • Bulk Insert of hundreds of millions of records

    - by Dave Jarvis
    What is the fastest way to insert 237 million records into a table that has rules (for distributing the data across 84 child tables)? First I tried plain inserts. No go. Then I tried inserts with BEGIN/COMMIT. Not nearly fast enough. Next, I tried COPY FROM, but then noticed the documentation states that rules are ignored. (And it was having difficulties with the column order and date format - it said that '1984-07-1' was not a valid integer; true, but a bit unexpected.) Some example data:

        station_id,taken,amount,category_id,flag
        1,'1984-07-1',0,4,
        1,'1984-07-2',0,4,
        1,'1984-07-3',0,4,
        1,'1984-07-4',0,4,T

    Here is the table structure (with one rule included):

        CREATE TABLE climate.measurement (
            id bigserial NOT NULL,
            station_id integer NOT NULL,
            taken date NOT NULL,
            amount numeric(8,2) NOT NULL,
            category_id smallint NOT NULL,
            flag character varying(1) NOT NULL DEFAULT ' '::character varying
        )
        WITH (
            OIDS=FALSE
        );
        ALTER TABLE climate.measurement OWNER TO postgres;

        CREATE OR REPLACE RULE i_measurement_01_001 AS
            ON INSERT TO climate.measurement
            WHERE date_part('month'::text, new.taken)::integer = 1
              AND new.category_id = 1
        DO INSTEAD
            INSERT INTO climate.measurement_01_001 (id, station_id, taken, amount, category_id, flag)
            VALUES (new.id, new.station_id, new.taken, new.amount, new.category_id, new.flag);

    I can generate the data into any format. I am looking for something that won't take four days. I originally had the data in MySQL (and still do), but am hoping to get a performance increase by switching to PostgreSQL, and am eager to use its PL/R extensions for stats. I was also thinking about using pg_bulkload: http://pgbulkload.projects.postgresql.org/ Any help, tips, or guidance would be greatly appreciated. Thank you!
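
    A minimal sketch of the approach that usually wins here (assuming the input can be pre-split by month and category, which the question's "any format" suggests): since COPY bypasses rules, point COPY directly at each child table so no per-row rule evaluation happens at all:

        -- Pre-split the CSV by month/category, then bulk-load each child
        -- directly; the id column is omitted so bigserial fills it in.
        COPY climate.measurement_01_001 (station_id, taken, amount, category_id, flag)
            FROM '/data/measurement_01_001.csv'
            WITH CSV;

        -- Repeat (or script) the COPY for the remaining 83 child tables.

    One wrinkle from the sample data: under CSV rules an empty flag field arrives as NULL, so those values would need to be filled in beforehand (or the NOT NULL constraint relaxed) for the load to succeed.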

    Read the article

  • Java: how to tell if a line in a text file was supposed to be blank?

    - by defn
    I'm working on a project in which I have to read in a grammar file (breaking it up into my data structure), with the goal of being able to generate a random "DearJohnLetter". My problem is that when reading in the .txt file, I don't know how to find out whether a line was supposed to be completely blank or not, which is detrimental to the program. Here is an example of part of the file; how do I tell if the next line was supposed to be a blank line? (Btw, I'm just using a BufferedReader.) Thanks!

        <start>
        I have to break up with you because <reason> . But let's still <disclaimer> .

        <reason>
        <dubious-excuse>
        <dubious-excuse> , and also because <reason>

        <dubious-excuse>
        my <person> doesn't like you
        I'm in love with <another>
        I haven't told you this before but <harsh>
        I didn't have the heart to tell you this when we were going out, but <harsh>
        you never <romantic-with-me> with me any more
        you don't <romantic> any more
        my <someone> said you were bad news
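
    A minimal sketch of the usual BufferedReader idiom: readLine() strips the line terminator, so a line that was blank in the file comes back as an empty (or whitespace-only) string, while end-of-file comes back as null - the two cases are distinguishable:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class GrammarReader {
            public static void main(String[] args) throws IOException {
                BufferedReader in = new BufferedReader(new FileReader("grammar.txt"));
                String line;
                while ((line = in.readLine()) != null) {   // null only at end of file
                    if (line.trim().length() == 0) {
                        System.out.println("-- blank separator line --");
                    } else {
                        System.out.println("content: " + line);
                    }
                }
                in.close();
            }
        }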

    Read the article

  • Fading out an article and dropping down a form at the same time?

    - by eveo
    I have an article:

        <article>
            Some paragraphs.
        </article>

    Below that I have my contact form:

        <div id="contact">
            stuff, this form is 600px tall
        </div>

    #contact is set to display: none; and I use jQuery to toggle it. What I'm trying to do is modify this script from http://tutsplus.com/lesson/slides-and-structure/ so that it fades out the article text and then slides in the contact form. I'm having some trouble. Code:

        <script>
        (function() {
            $('html').addClass('js');
            var contactForm = {
                container: $('#contact'),
                article: $('article'),
                init: function() {
                    $('<button></button>', {
                        text: 'Contact Me'
                    })
                    .insertAfter('article:first')
                    .on('click', function() {
                        console.log(this);
                        this.show();
                        // contactForm.article.fadeToggle(300);
                        // contactForm.container.show();
                    })
                },
                show: function() {
                    contactForm.close.call(contactForm.container);
                    contactForm.container.slideToggle(300);
                },
                close: function() {
                    console.log(this);
                    $('<span class=close>X</span>')
                        .prependTo(this)
                        .on('click', function() {
                            contactForm.article.fadeToggle(300);
                            contactForm.container.slideToggle(300);
                        })
                }
            };
            contactForm.init();
        })();
        </script>

    The part that is not working is:

        .on('click', function() {
            console.log(this);
            this.show();
            // contactForm.article.fadeToggle(300);
            // contactForm.container.show();
        })

    When I do .on('click', this.show); it works fine, but when I put this.show inside a function it does not work!
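
    A minimal sketch of why it breaks and one way to fix it: inside the click handler, this is the clicked <button> DOM element, not the contactForm object, so this.show is undefined; refer to the object by name, and chain the slide off the fade's completion callback to get the fade-then-slide sequence:

        .on('click', function() {
            // Fade the article out first; when that finishes,
            // slide the contact form in.
            contactForm.article.fadeToggle(300, function() {
                contactForm.show();
            });
        })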

    Read the article

  • How do I pass an arbitrary date format from C# to the SQL backend

    - by Jims
    I have a datetime field for the transaction date in the back end, so I am passing that date from the C#.NET front end in the format below:

        2011-01-01 12:17:51.967

    To do this I have written, in the presentation layer:

        string date = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture);
        PropertyClass prp = new PropertyClass();
        prp.TransDate = Convert.ToDateTime(date);

    PropertyClass structure:

        public class PropertyClass
        {
            private DateTime transdate;
            public DateTime TransDate
            {
                get { return transdate; }
                set { transdate = value; }
            }
        }

    From the DAL layer I pass the transaction date like this:

        Cmd.Parameters.AddWithValue("@TranSactionDate", SqlDbType.DateTime).Value = propertyObj.TransDate;

    While debugging, the presentation-layer line

        string date = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture);

    gives the expected date format, but when the debugger reaches

        prp.TransDate = Convert.ToDateTime(date);

    the date format changes back to 1/1/2011. But my backend SQL date field wants the date parameter in the 2011-01-01 12:17:51.967 format, otherwise it throws an "invalid date format" exception. Note: while passing the date as a string without converting to DateTime, I get exceptions like:

        System.Data.SqlTypes.SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.
           at System.Data.SqlTypes.SqlDateTime.FromTimeSpan(TimeSpan value)
           at System.Data.SqlTypes.SqlDateTime.FromDateTime(DateTime value)
           at System.Data.SqlTypes.SqlDateTime..ctor(DateTime value)
           at System.Data.SqlClient.MetaType.FromDateTime(DateTime dateTime, Byte cb)
           at System.Data.SqlClient.TdsParser.WriteValue(Object value, MetaType type, Byte scale, Int32 actualLength, Int32 encodingByteSize, Int32 offset, TdsParserStateObject stateObj)
           at System.Data.SqlClient.TdsParser.TdsExecuteRPC(_SqlRPC[] rpcArray, Int32 timeout, Boolean inSchema, SqlNotificationRequest notificationRequest, TdsParserStateObject stateObj, Boolean isCommandProc)
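
    A minimal sketch of the usual fix: a DateTime value has no format of its own (formats only exist when it is rendered as a string), so skip the string round trip entirely and pass the DateTime through a typed parameter. Note also that AddWithValue's second argument is the parameter *value*, so the original call was sending the SqlDbType enum instead of the date:

        // Add (not AddWithValue) so SqlDbType.DateTime is the parameter
        // type; ADO.NET then handles the wire format for SQL Server.
        Cmd.Parameters.Add("@TransactionDate", SqlDbType.DateTime).Value = propertyObj.TransDate;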

    Read the article

  • How to distribute an offline cube for Excel

    - by Mike M
    I have the following scenario: a cube created in SSAS 2008. I can connect to this cube via Excel, I can create an offline cube file, and I can connect to that offline cube file. Now, say I want to email the Excel file along with the cube file so that another user can view it. I run into the problem that the connection path to the offline cube is hard-coded into the Excel file. It's the same problem this person had: http://stackoverflow.com/questions/1253950/opening-offline-cube-from-another-machine. Their solution was to just make sure the other user saved the cube in the same directory structure. I don't love that solution. I also came across this idea: http://www.pcreview.co.uk/forums/thread-948974.php. I tried that, it errored out, but I am not an Excel VBA programmer and really have no idea whether I even put the code in the right place. So anyway, does anyone out there have any ideas about how to do this? If the VBA solution is the best, could someone give me some tips on where to actually put that code?

    Read the article

  • Have I taken a wrong path in programming by being excessively worried about code elegance and style?

    - by Ygam
    I am in a major slump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed that I have the following attitudes in programming:

        - I tend to be more of a purist, scorning inelegant approaches to solving problems with code
        - I tend to look at everything on a large scale, planning it all before I start coding, either in simple flowcharts or complex UML charts
        - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times
        - I am obsessed with good directory structures and file, class, method, and variable naming conventions
        - I always want to study something new, even, as I said, at the cost of missing deadlines
        - I tend to see software development as something to engineer, to architect - that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking
        - I tend to combine OOP and procedural coding whenever I see fit
        - I want my code to execute fast (thus the elegant approaches and refactoring)

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they started programming in our first year of college). By "the other way around" I mean: they fire up their editors and get the job done much faster, because they don't stop to check how clean their code is or how elegant their algorithms are; they don't bother with OOP however big their projects are; they mostly use web APIs and piece them together, and voila! Working code! Clients are happy, they get paid fast - at the expense of really unmaintainable, hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common counterargument being that internet connections are much faster these days and hardware is more powerful). The excuse I often receive is that clients don't care how you write the code, but they do care how soon you deliver it. If it works, then all is good. Now, was my "purist" approach to programming the wrong way to start? Should I just dump these purist concepts and code like hell, because, as I have seen, clients don't really care how beautifully coded it is?

    Read the article

  • NHibernate join on a table twice

    - by Zuber
    Consider the following class structure...

        public class ListViewControl
        {
            public int SystemId { get; set; }
            public List<ControlAction> Actions { get; set; }
            public List<ControlAction> ListViewActions { get; set; }
        }

        public class ControlAction
        {
            public string BlahBlah { get; set; }
        }

    I want to load the ListViewControl class eagerly using NHibernate. The mapping, using Fluent NHibernate, is shown below:

        public UIControlMap()
        {
            Id(x => x.SystemId);
            HasMany(x => x.Actions)
                .KeyColumn("ActionId")
                .Cascade.AllDeleteOrphan()
                .AsBag()
                .Cache.ReadWrite().IncludeAll();
            HasMany(x => x.ListViewActions)
                .KeyColumn("ListViewActionId")
                .Cascade.AllDeleteOrphan()
                .AsBag()
                .Cache.ReadWrite().IncludeAll();
        }

    This is how I am trying to load it eagerly:

        var baseActions = DetachedCriteria.For<ListViewControl>()
            .CreateCriteria("Actions", JoinType.InnerJoin)
            .SetFetchMode("BlahBlah", FetchMode.Eager)
            .SetResultTransformer(new DistinctRootEntityResultTransformer());

        var listViewActions = DetachedCriteria.For<ListViewControl>()
            .CreateCriteria("ListViewActions", JoinType.InnerJoin)
            .SetFetchMode("BlahBlah", FetchMode.Eager)
            .SetResultTransformer(new DistinctRootEntityResultTransformer());

        var listViews = DetachedCriteria.For<ListViewControl>()
            .SetFetchMode("Actions", FetchMode.Eager)
            .SetFetchMode("ListViewActions", FetchMode.Eager)
            .SetResultTransformer(new DistinctRootEntityResultTransformer());

        var result = _session.CreateMultiCriteria()
            .Add("listViewActions", listViewActions)
            .Add("baseActions", baseActions)
            .Add("listViews", listViews)
            .SetResultTransformer(new DistinctRootEntityResultTransformer())
            .GetResult("listViews");

    Now, my problem is that the ListViewControl gets the correct records in both Actions and ListViewActions, but there are multiple entries of the same record. The number of duplicates is equal to the number of joins made to the ControlAction table, in this case two. How can I avoid this? If I remove SetFetchMode from the listViews query, the actions are loaded lazily through a proxy, which I don't want.

    Read the article

  • Referencing object's identity before submitting changes in LINQ

    - by Axarydax
    Hi, is there a way of knowing the ID of the identity column of a record inserted via InsertOnSubmit beforehand, e.g. before calling the DataContext's SubmitChanges? Imagine I'm populating some kind of hierarchy in the database, but I wouldn't want to submit changes on each recursive call for each child node (e.g. if I had a Directories table and a Files table and were recreating my filesystem structure in the database). I'd like to do it this way: I create a Directory object, set its name and attributes, then InsertOnSubmit it into the DataContext.Directories collection, then reference Directory.ID in its child Files. Currently I need to submit changes to insert the 'directory' into the database before the mapping fills in its ID column, but this creates a lot of transactions and database accesses, and I imagine that if I did the inserting in one batch, the performance would be better. What I'd like to do is somehow use Directory.ID before committing changes, create all my File and Directory objects in advance, and then do one big submit that puts all the stuff into the database. I'm also open to solving this problem via a stored procedure; I assume the performance would be even better if all operations were done directly in the database.
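
    A minimal sketch of the standard LINQ to SQL pattern for this (entity and property names are illustrative, and assume the Directory/File associations are mapped in the model): wire up object references instead of IDs, and the runtime resolves the identity values in dependency order inside a single SubmitChanges:

        var root = new Directory { Name = "root" };
        var sub = new Directory { Name = "sub", Parent = root };

        // Associate by reference; File.DirectoryID is filled in
        // automatically once the Directory rows get their identities.
        sub.Files.Add(new File { Name = "notes.txt" });

        db.Directories.InsertOnSubmit(root);   // children are discovered via associations
        db.SubmitChanges();                    // one batch; IDs assigned here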

    Read the article

  • Running an existing LINQ query against a dynamic object (DataTable like)

    - by TomTom
    Hello, I am working on a generic OData provider to go against a custom data provider that we have here. This is fully dynamic, in that I query the data provider for the tables it knows. I have a basic storage structure in place so far, based on the OData sample code. My problem is: OData supports queries and expects me to hand in an IQueryable implementation. On the lower side, I don't have any query support. Not a joke - the provider returns tables, and the WHERE clause is not supported. Performance is not an issue here - the tables are small. It is OK to sort them in the OData provider. My main problem is this: I submit a SQL statement to get the data of a table. The result is some sort of ADO.NET data reader. I need to expose an IQueryable implementation for this data to potentially allow later filtering. Any idea how best to tackle that? .NET 3.5 only (no 4.0 planned for some time). I was seriously thinking of creating dynamic DTO classes for every table (emitting bytecode) so I can use standard LINQ. Right now I am using a dictionary per entry (not too efficient), but I see no real way to filter / sort based on them.
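
    A minimal sketch of the simplest bridge on .NET 3.5 (leaning on the question's note that the tables are small): materialize the data reader into memory and let LINQ to Objects provide the IQueryable via AsQueryable():

        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        static IQueryable<IDictionary<string, object>> ToQueryable(IDataReader reader)
        {
            var rows = new List<IDictionary<string, object>>();
            while (reader.Read())
            {
                var row = new Dictionary<string, object>();
                for (int i = 0; i < reader.FieldCount; i++)
                    row[reader.GetName(i)] = reader.GetValue(i);
                rows.Add(row);
            }
            // Any later Where/OrderBy runs in memory via LINQ to Objects.
            return rows.AsQueryable();
        }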

    Read the article

  • URL rewriting to a common end point

    - by sunil
    I want to create an ASP.NET white-label site, http://whitelabel.com, that can be styled for each of our clients according to their specific needs. So, for example, client abc would see the site in their corporate colours, accessed through their specific URL http://abc.com. Likewise, client xyz would see the site in their own styling at their own URL http://xyz.com. Typing either URL, in effect, takes the user to http://whitelabel.com, where the styling is applied and the client's URL structure is retained. I was thinking of URL rewriting using URLRewriter.NET (http://urlrewriter.net/), or similar, mapping the incoming address to a client id and applying the theme accordingly. So, a URL rewrite rule may be something like:

        <rewrite url="http://abc.com/(.+)" to="~/$1?id=1" />
        <rewrite url="http://xyz.com/(.+)" to="~/$1?id=2" />

    I could then read the id, map it to the client, and with a bit of jiggery-pokery apply the correct theme. I was wondering:

        - is this the right approach?
        - have I overlooked something?
        - is there a better way to do this?

    Any suggestions would be appreciated.
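
    A minimal sketch of an alternative that avoids rewriting entirely: since each client domain already resolves to the same application, branch on the Host header per request and pick the theme from a lookup (the table and theme names below are illustrative):

        // In a common base page; Theme can only be set this early.
        protected override void OnPreInit(EventArgs e)
        {
            base.OnPreInit(e);

            var themesByHost = new Dictionary<string, string>
            {
                { "abc.com", "AbcTheme" },
                { "xyz.com", "XyzTheme" },
            };

            string theme;
            string host = Request.Url.Host.ToLowerInvariant();
            Theme = themesByHost.TryGetValue(host, out theme) ? theme : "Default";
        }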

    Read the article

  • How to define a custom path to an Interop *.dll

    - by NoviceAndNovice
    Well, I have an ActiveX (*.ocx) component, and I use it in a managed C++/CLI project by writing a managed wrapper around the ActiveX component (.NET has great interop services: it gives me a generated DLL that I can easily use from managed code). The problem is that Visual Studio (2008) automatically copies the generated interop *.dll to the directory where my *.exe file lives, but I want to put all my generated interop *.dlls into a folder... Suppose my directory structure is like so:

        D:\MyProject\Output\MyProject.exe            // my managed exe
        D:\MyProject\Output\Interop.XXXLib.1.0.dll   // interop dll

    I want to put Interop.XXXLib.1.0.dll into a new folder, D:\MyProject\Output\Interops, and use it from that directory... How can I do it? Best wishes. PS: What I found so far was using codeBase/probing tags in my app.config file, such as:

        <?xml version="1.0"?>
        <configuration>
            <runtime>
                <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
                    <probing privatePath="Interops" />
                </assemblyBinding>
            </runtime>
        </configuration>

    But it did not work in C++/CLI.

    Read the article

  • Dumping an ADODB recordset to XML, then back to a recordset, then saving to the db

    - by Mark Biek
    I've created an XML file using the .Save() method of an ADODB recordset in the following manner:

        dim res
        dim objXML: Set objXML = Server.CreateObject("MSXML2.DOMDocument")
        'This returns an ADODB recordset
        set res = ExecuteReader("SELECT * from some_table")
        With res
            Call .Save(objXML, 1)
            Call .Close()
        End With
        Set res = nothing

    Let's assume that the XML generated above then gets saved to a file. I'm able to read the XML back into a recordset like this:

        dim res : set res = Server.CreateObject("ADODB.recordset")
        res.open server.mappath("/admin/tbl_some_table.xml")

    And I can loop over the records without any problem. However, what I really want to do is save all of the data in res to a table in a completely different database. We can assume that some_table already exists in this other database and has the exact same structure as the table I originally queried to make the XML. I started by creating a new recordset and using AddNew to add all of the rows from res to the new recordset:

        dim outRes : set outRes = Server.CreateObject("ADODB.recordset")
        dim outConn : set outConn = Server.CreateObject("ADODB.Connection")
        dim testConnStr : testConnStr = "DRIVER={SQL Server};SERVER=dev-windows\sql2000;UID=myuser;PWD=mypass;DATABASE=Testing"
        outConn.open testConnStr
        outRes.activeconnection = outConn
        outRes.cursortype = adOpenDynamic
        outRes.locktype = adLockOptimistic
        outRes.source = "product_accessories"
        outRes.open
        while not res.eof
            outRes.addnew
            for i = 0 to res.fields.count - 1
                outRes(res.fields(i).name) = res(res.fields(i).name)
            next
            outRes.movefirst
            res.movenext
        wend
        outRes.updatebatch

    But this bombs the first time I try to assign a value from res to outRes:

        Microsoft OLE DB Provider for ODBC Drivers error '80040e21'
        Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.

    Can someone tell me what I'm doing wrong or suggest a better way for me to copy the data loaded from XML to a different database?

    Read the article

  • (HARD) Remove accents from a JSON response using the raw content

    - by Pentium10
    This is a follow-up to this question: Remove accents from a JSON response. The accepted answer there works for a single item/string of raw JSON content, but I would like to run a full transformation over the entire raw content of the JSON without parsing each object/array/item. What I've tried is this:

        function removeAccents($jsoncontent)
        {
            $obj = json_decode($jsoncontent); // use decode to transform the unicode chars to utf
            $content = serialize($obj); // serialize into a string, so the whole obj structure can be handled as a whole
            $a = 'ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûýýþÿRr';
            $b = 'aaaaaaaceeeeiiiidnoooooouuuuybsaaaaaaaceeeeiiiidnoooooouuuyybyRr';
            $content = utf8_decode($content);
            $jsoncontent = strtr($content, $a, $b); // at this point the accents are removed, and everything is good
            echo $jsoncontent;
            $obj = unserialize($jsoncontent); // this unserialization returns false, probably because we messed up the serialized string
            return json_encode($obj);
        }

    As you see, after I decoded the JSON content I serialized the object to get a string representation of it, then removed the accents from that string. But this way I have a problem building the object back, as the unserialize call returns false. How can I fix this?
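
    A minimal sketch of a safer route that drops serialize() entirely. The unserialize fails because serialize() embeds string byte lengths (the s:N:"..." records), and utf8_decode/strtr change the string bytes, invalidating those lengths. Decoding to nested arrays and transliterating only the leaf strings avoids the problem:

        <?php
        function transliterateValue(&$value)
        {
            $a = 'ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûýýþÿRr';
            $b = 'aaaaaaaceeeeiiiidnoooooouuuuybsaaaaaaaceeeeiiiidnoooooouuuyybyRr';
            if (is_string($value)) {
                $value = strtr(utf8_decode($value), $a, $b);
            }
        }

        function removeAccentsDeep($jsoncontent)
        {
            $data = json_decode($jsoncontent, true);           // true => nested assoc arrays
            array_walk_recursive($data, 'transliterateValue'); // touches every leaf value
            return json_encode($data);
        }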

    Read the article

  • How can I work around WinXP using ports 1025-5000 as ephemeral?

    - by Chris Dolan
    If you create a TCP client socket with port 0 instead of a non-zero port, then the operating system chooses any free ephemeral port for you. Most OSes choose ephemeral ports from the IANA dynamic port range of 49152-65535. However, in Windows Server 2003 and earlier (including XP), Microsoft used ports 1025-5000 as the ephemeral range, according to their bind() documentation. I run multiple Java services on the same hardware. On rare occasions, this range collides with well-known ports that I use for other services (e.g. port 4160 for Jini discovery). While rare, this has caused real problems. Is there any easy way to tell Windows or Java to use a different port range for client sockets? Microsoft's docs indicate that I can change the high end of that range via the MaxUserPort TcpIP registry setting, but I see no way to change the low end.

    Update: I've made some progress on this. It looks like Microsoft has a concept of reserved ports that are exceptions to the ephemeral port range. There's a registry setting that lets you change this permanently, and apparently there must be an API to do the same thing, because there's a data structure that holds high/low values for reserved port ranges - but I can't find the actual function call anywhere... The registry solution may work, but now I'm fixated on this API.

    Read the article

  • Why can't I include these data files in a Python distribution using distutils?

    - by froadie
    I'm writing a setup.py file for a Python project so that I can distribute it. The aim is to eventually create a .egg file, but I'm trying to get it to work first with distutils and a regular .zip. This is an Eclipse PyDev project and my file structure is something like this:

        ProjectName
            src
                somePackage
                    module1.py
                    module2.py
                    ...
            config
                propsFile1.ini
                propsFile2.ini
                propsFile3.ini
            setup.py

    Here's my setup.py code so far:

        from distutils.core import setup

        setup(name='ProjectName',
              version='1.0',
              packages=['somePackage'],
              data_files=[('config', ['..\config\propsFile1.ini',
                                      '..\config\propsFile2.ini',
                                      '..\config\propsFile3.ini'])])

    When I run this (with sdist as a command line parameter), a .zip file gets generated with all the Python files - but the config files are not included. I thought that this code:

        data_files = [('config', ['..\config\propsFile1.ini', '..\config\propsFile2.ini', '..\config\propsFile3.ini'])]

    indicated that those 3 specified config files should be copied to a "config" directory in the zip distribution. Why is this code not accomplishing anything? What am I doing wrong? (I have also tried playing around with the paths of the config files... but nothing seems to help. Would Python throw an error or warning if the path was incorrect or the file was not found?)
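
    A minimal sketch of the usual fix (assuming the immediate goal is just getting the .ini files into the sdist): data_files controls what gets *installed*, but what goes into an sdist archive is governed by the manifest, so add a MANIFEST.in next to setup.py:

        # MANIFEST.in - read by `python setup.py sdist`
        include config/*.ini

    Relatedly, distutils expects data_files paths relative to the directory containing setup.py, so keeping the config folder beside setup.py and writing config/propsFile1.ini (no "..") avoids the path confusion as well.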

    Read the article

  • Exec problem in SQL Server 2005

    - by IordanTanev
    Hi, I have a situation where I have two databases with the same structure. The first has some data in its data tables. I need to create a script that will transfer the data from the first database to the second. I have created this script:

        DECLARE @table_name nvarchar(MAX), @query nvarchar(MAX)
        DECLARE @table_cursor CURSOR

        SET @table_cursor = CURSOR FAST_FORWARD
        FOR SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES

        OPEN @table_cursor
        FETCH NEXT FROM @table_cursor INTO @table_name

        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @query = 'INSERT INTO ' + @table_name + ' SELECT * FROM MyDataBase.dbo.' + @table_name
            print @query
            exec @query

            FETCH NEXT FROM @table_cursor INTO @table_name
        END

        CLOSE @table_cursor
        DEALLOCATE @table_cursor

    The problem is that when I run the script, the print @query statement prints a statement like this:

        INSERT INTO table SELECT * FROM MyDataBase.dbo.table

    When I copy this and run it from Management Studio it works fine. But when the script tries to run it with exec, I get this error:

        Msg 911, Level 16, State 1, Line 21
        Could not locate entry in sysdatabases for database 'INSERT INTO table SELECT * FROM MPDEV090314'. No entry found with that name. Make sure that the name is entered correctly.

    I hope someone can tell me what is wrong with this. Best regards, Iordan Tanev
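
    A minimal sketch of the fix: exec @query does not execute the string as a batch - it treats the variable's contents as the name of a stored procedure (hence the error about resolving it as an object name). Executing a dynamic SQL string needs parentheses or sp_executesql:

        -- Either wrap the variable in parentheses...
        EXEC (@query)

        -- ...or, preferably, use sp_executesql (supports parameters and plan reuse):
        EXEC sp_executesql @query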

    Read the article

  • ConcurrentLinkedQueue$Node remains in heap after remove()

    - by action8
    I have a multithreaded app writing and reading a ConcurrentLinkedQueue, which is conceptually used to back entries in a list/table. I originally used a ConcurrentHashMap for this, which worked well. A new requirement required tracking the order entries came in, so they could be removed in oldest-first order, depending on some conditions. ConcurrentLinkedQueue appeared to be a good choice, and functionally it works well. A configurable number of entries is held in memory, and when a new entry is offered once the limit is reached, the queue is searched in oldest-first order for one that can be removed. Certain entries are not to be removed by the system and instead wait for client interaction. What appears to be happening is that I have an entry at the front of the queue that arrived, say, 100K entries ago. The queue appears to hold the configured number of entries (size() == 100), but when profiling, I found that there were ~100K ConcurrentLinkedQueue$Node objects in memory. This appears to be by design; just glancing at the source for ConcurrentLinkedQueue, a remove merely clears the reference to the stored object but leaves the linked list node in place for iteration. Finally, my question: is there a "better" lazy way to handle a collection of this nature? I love the speed of the ConcurrentLinkedQueue; I just can't afford the unbounded leak that appears to be possible in this case. If not, it seems like I'd have to create a second structure to track order, which may have the same issues, plus a synchronization concern.

    Read the article

  • Injecting correct object graph using StructureMap in Queue of different Objects

    - by davy
    I have a queuing service that has to inject a different dependency graph depending on the type of object in the queue. I'm using StructureMap. So, if the object in the queue is TypeA, the concrete classes for TypeA are used, and if it's TypeB, the concrete classes for TypeB are used. I'd like to avoid code in the queue like:

        if (typeA)
        {
            // set up the TypeA graph
        }
        else if (typeB)
        {
            // set up the TypeB graph
        }

    Within the graph I also have generic classes, such as an IReader(ISomething, ISomethingElse), where IReader is generic but needs the correct ISomething and ISomethingElse injected for the type. ISomething will also have dependencies, and so on. Currently I create a TypeA or TypeB object, inject a generic Processor class into it using StructureMap, and then manually pass a TypeA or TypeB factory into a method like:

        Processor.Process(new TypeAFactory()); // perhaps I should have an abstract factory...

    However, because the factory then creates the generic IReader mentioned above, I end up manually injecting all the TypeA or TypeB classes from there on. I hope enough of this makes sense. I am new to StructureMap and was hoping somebody could point me in the right direction toward a flexible and elegant solution here. Thanks
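
    A minimal sketch of one StructureMap idiom for this (type names are illustrative, and this assumes the 2.x Registry/named-instance API): register one named graph per queue-item type, then resolve by name so the queue never needs the if/else:

        public class QueueRegistry : Registry
        {
            public QueueRegistry()
            {
                // Each named instance roots a whole object graph;
                // its own collaborators are resolved by StructureMap.
                For<ISomething>().Use<TypeASomething>().Named("TypeA");
                For<ISomething>().Use<TypeBSomething>().Named("TypeB");
            }
        }

        // In the queue: pick the graph from the dequeued item's type name.
        var something = ObjectFactory.GetNamedInstance<ISomething>(item.GetType().Name);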

    Read the article
