Search Results

Search found 21115 results on 845 pages for 'byte order'.

Page 267/845 | < Previous Page | 263 264 265 266 267 268 269 270 271 272 273 274  | Next Page >

  • wordpress generating slow mysql queries - is it an index problem?

    - by tash
    Hello Stack Overflow. I've got very slow MySQL queries coming from my WordPress site. It's making everything slow and I think this is eating up CPU usage. I've pasted the EXPLAIN results for the two most frequently problematic queries below. This is a typical result, although very occasionally the queries do seem to be performed at a more normal speed. I have the usual WordPress indexes on the database tables. You will see that one of the queries is generated from WordPress core code, and not from anything specific to my site, like the theme. I have a vague feeling that the database is not always using the indexes, or is not using them properly... Is this right? Does anyone know how to fix it? Or is it a different problem entirely? Many thanks in advance for any help anyone can offer - it is hugely appreciated.

    Query: [wp-blog-header.php(14): wp()]

        SELECT SQL_CALC_FOUND_ROWS wp_posts.*
        FROM wp_posts
        WHERE 1=1
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

        id  select_type  table     type  possible_keys     key               key_len  ref    rows  Extra
        1   SIMPLE       wp_posts  ref   type_status_date  type_status_date  63       const  427   Using where; Using filesort

    Query time: 34.2829 (ms)

    Query: [wp-content/themes/LMHR/index.php(40): query_posts()]

        SELECT SQL_CALC_FOUND_ROWS wp_posts.*
        FROM wp_posts
        WHERE 1=1
          AND wp_posts.ID NOT IN (
              SELECT tr.object_id
              FROM wp_term_relationships AS tr
              INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id
              WHERE tt.taxonomy = 'category'
                AND tt.term_id IN ('217', '218', '223', '224')
          )
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

        id  select_type         table     type    possible_keys                      key               key_len  ref                                       rows  Extra
        1   PRIMARY             wp_posts  ref     type_status_date                   type_status_date  63       const                                     427   Using where; Using filesort
        2   DEPENDENT SUBQUERY  tr        ref     PRIMARY,term_taxonomy_id           PRIMARY           8        func                                      1     Using index
        2   DEPENDENT SUBQUERY  tt        eq_ref  PRIMARY,term_id_taxonomy,taxonomy  PRIMARY           8        antin1_lovemusic2010.tr.term_taxonomy_id  1     Using where

    Query time: 70.3900 (ms)
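    For the second query, one rewrite worth benchmarking - a sketch only, using the same WordPress tables as above - is to turn the dependent NOT IN subquery into a LEFT JOIN anti-join, which MySQL 5.0/5.1 often plans better than a subquery it re-evaluates per row:

        SELECT SQL_CALC_FOUND_ROWS wp_posts.*
        FROM wp_posts
        LEFT JOIN wp_term_relationships AS tr
          ON tr.object_id = wp_posts.ID
         AND tr.term_taxonomy_id IN (
             SELECT tt.term_taxonomy_id
             FROM wp_term_taxonomy AS tt
             WHERE tt.taxonomy = 'category'
               AND tt.term_id IN ('217', '218', '223', '224'))
        WHERE tr.object_id IS NULL
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6;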

    Read the article

  • forms problem in django 1.1

    - by alexarsh
    I have the following form:

        class ModuleItemForm2(forms.ModelForm):
            class Meta:
                model = Module_item
                fields = ('title', 'media', 'thumb', 'desc', 'default', 'player_option')

    The model is:

        class Module_item(models.Model):
            title = models.CharField(max_length=100)
            layout = models.CharField(max_length=5, choices=LAYOUTS_CHOICE)
            media = models.CharField(help_text='Media url', max_length=500, blank=True, null=True)
            conserv = models.ForeignKey(Conserv, help_text='Redirect to Conserv', blank=True, null=True)
            conserve_section = models.CharField(max_length=100, help_text='Section within the redirected Conserv', blank=True, null=True)
            parent = models.ForeignKey('self', help_text='Upper menu.', blank=True, null=True)
            module = models.ForeignKey(Module, blank=True, null=True)
            thumb = models.FileField(upload_to='sms/module_items/thumbs', blank=True, null=True)
            desc = models.CharField(max_length=500, blank=True, null=True)
            auto_play = models.IntegerField(help_text='Auto start play (miliseconds)', blank=True, null=True)
            order = models.IntegerField(help_text='Display order', blank=True, null=True)
            depth = models.IntegerField(help_text='The layout depth', blank=True, null=True)
            flow_replace = models.IntegerField(blank=True, null=True)
            default = models.IntegerField(help_text='The selected sub item (Note: Starting from 0)', blank=True, null=True)
            player_options = models.CharField(max_length=1000, null=True, blank=True)

    In my view I build the form:

        module_item_form2 = ModuleItemForm2()
        print module_item_form2

    And I get the following error on the print line:

        'NoneType' object has no attribute 'label'

    It works fine with Django 1.0.2. I see the error only in Django 1.1. Do you have an idea what I am doing wrong? Regards, Arshavski Alexander.
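    Note that the Meta.fields tuple lists 'player_option' while the model defines 'player_options'. Django 1.1 is stricter than 1.0.2 about names in Meta.fields that don't exist on the model, so - assuming the mismatch is the cause - a sketch of the fix is simply:

        class ModuleItemForm2(forms.ModelForm):
            class Meta:
                model = Module_item
                # match the model field name exactly
                fields = ('title', 'media', 'thumb', 'desc', 'default', 'player_options')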

    Read the article

  • How to populate Range variable from a Sub/Function call?

    - by Ken Ingram
    I am trying to get this Sub to work, but the operationalRange variable is not being assigned, despite the fact that the function selectBodyRow(bodyName) works fine.

        Sub sortRows(bodyName As String, ByRef wksht As Worksheet)
            Dim operationalRange As Range
            Set operationalRange = selectBodyRow(bodyName)
            Debug.Print "Sorting Worksheet: " & wksht.Name
            If Not operationalRange Is Nothing Then
                operationalRange.Select
                Debug.Print "Sorting " & operationalRange.Count & "Rows."
                ActiveWorkbook.Worksheets(wksht.Name).Sort.SortFields.Clear
                ActiveWorkbook.Worksheets(wksht.Name).Sort.SortFields.Add Key:=operationalRange, _
                    SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal
                ActiveWorkbook.Worksheets(wksht.Name).Sort.SortFields.Add Key:=operationalRange, _
                    SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal
                With ActiveWorkbook.Worksheets(wksht.Name).Sort
                    .SetRange operationalRange
                    .Header = xlGuess
                    .MatchCase = False
                    .Orientation = xlTopToBottom
                    .SortMethod = xlPinYin
                    .Apply
                End With
            Else
                MsgBox "Body is not being Set"
            End If
        End Sub

    The Function being called by the above Sub is:

        Function selectBodyRow(bodyName As String) As Range
            Dim rangeStart As String, rangeEnd As String
            Dim selectionStart As Range, selectionEnd As Range
            Dim result As Range, srchRng As Range, cngrs As Variant
            If bodyName = "WEST" Then
                rangeStart = "<-WEST START->"
                rangeEnd = "<-WEST END->"
            ElseIf bodyName = "EAST" Then
                rangeStart = "<-EAST START->"
                rangeEnd = "<-EAST END->"
            End If
            Set srchRng = Range("A:A")
            srchRng.Select
            Set selectionStart = srchRng.Find(What:=rangeStart, After:=ActiveCell, LookIn:=xlValues, _
                LookAt:=xlPart, SearchOrder:=xlByRows, SearchDirection:=xlNext, _
                MatchCase:=False, SearchFormat:=False)
            Set selectionEnd = srchRng.Find(What:=rangeEnd, After:=ActiveCell, LookIn:=xlValues, _
                LookAt:=xlPart, SearchOrder:=xlByRows, SearchDirection:=xlNext, _
                MatchCase:=False, SearchFormat:=False)
            Set result = Range(selectionStart.Offset(1, 0), selectionEnd.Offset(-1, 0))
            result.EntireRow.Select
        End Function
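    A minimal sketch of the likely fix: a VBA Function only returns whatever is assigned to its own name, and selectBodyRow selects the rows but never assigns them, so every caller receives Nothing. Ending the function with

        Set selectBodyRow = result.EntireRow

    instead of result.EntireRow.Select should make Set operationalRange = selectBodyRow(bodyName) work as intended (the Select calls can then be dropped entirely).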

    Read the article

  • How to give points for each index of a list

    - by Eric Jung
        def voting_borda(rank_ballots):
            '''(list of list of str) -> tuple of (str, list of int)

            The parameter is a list of 4-element lists that represent rank
            ballots for a single riding. The Borda Count is determined by
            assigning points according to ranking. A party gets 3 points for
            each first-choice ranking, 2 points for each second-choice ranking
            and 1 point for each third-choice ranking. (No points are awarded
            for being ranked fourth.) For example, the rank ballot shown above
            would contribute 3 points to the Liberal count, 2 points to the
            Green count and 1 point to the CPC count. The party that receives
            the most points wins the seat. Return a tuple where the first
            element is the name of the winning party according to Borda Count
            and the second element is a four-element list that contains the
            total number of points for each party. The order of the list
            elements corresponds to the order of the parties in PARTY_INDICES.'''
            #>>> voting_borda([['GREEN','NDP', 'LIBERAL', 'CPC'],
            #                  ['GREEN','CPC','LIBERAL','NDP'],
            #                  ['LIBERAL','NDP', 'CPC', 'GREEN']])
            #('GREEN',[4, 6, 5, 3])
            list_of_party_order = []
            for sublist in rank_ballots:
                for party in sublist[0]:
                    if party == 'GREEN':
                        GREEN_COUNT += 3
                    elif party == 'NDP':
                        NDP_COUNT += 3
                    elif party == 'LIBERAL':
                        LIBERAL_COUNT += 3
                    elif party == 'CPC':
                        CPC_COUNT += 3
                for party in sublist[1]:
                    if party == 'GREEN':
                        GREEN_COUNT += 2
                    elif party == 'NDP':
                        NDP_COUNT += 2
                    elif party == 'LIBERAL':
                        LIBERAL_COUNT += 2
                    elif party == 'CPC':
                        CPC_COUNT += 2
                for party in sublist[2]:
                    if party == 'GREEN':
                        GREEN_COUNT += 1
                    elif party == 'NDP':
                        NDP_COUNT += 1
                    elif party == 'LIBERAL':
                        LIBERAL_COUNT += 1
                    elif party == 'CPC':
                        CPC_COUNT += 1

    I don't know how I would give points for each index of the list MORE SIMPLY. Can someone please help me? Without being too complicated. Thank you!
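    One simpler sketch: keep the totals in a dictionary keyed by party name and index each ballot by rank, so no per-party branches are needed. The party order below is inferred from the docstring example's expected output [4, 6, 5, 3]; match it to your actual PARTY_INDICES:

        def voting_borda(rank_ballots):
            parties = ['NDP', 'GREEN', 'LIBERAL', 'CPC']  # assumed PARTY_INDICES order
            points = {party: 0 for party in parties}
            for ballot in rank_ballots:
                # 3 points for 1st choice, 2 for 2nd, 1 for 3rd, 0 for 4th
                for rank, party in enumerate(ballot[:3]):
                    points[party] += 3 - rank
            totals = [points[party] for party in parties]
            winner = max(parties, key=lambda party: points[party])
            return (winner, totals)

    With the docstring's three ballots this returns ('GREEN', [4, 6, 5, 3]), matching the expected output.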

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts to a 34GB table within a single Master/Slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single-field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions.

    1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a Datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem. So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID. I don't really see what that achieves, though, that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
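    On question 1), one hedged sketch: with InnoDB's default auto-increment locking (innodb_autoinc_lock_mode of 0 or 1), a single multi-row INSERT is assigned a consecutive block of IDs and LAST_INSERT_ID() returns the first of them, so the IDs of a batch can usually be recovered without a GUID. The table and column names here are placeholders:

        INSERT INTO my_table (a, b) VALUES (1, 'x'), (2, 'y'), (3, 'z');
        -- Returns the first ID of the batch; the rows received
        -- id, id+1, id+2 in VALUES order under lock mode 0 or 1.
        SELECT LAST_INSERT_ID();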

    Read the article

  • Maven3 Issues with building a multi-module enterprise project

    - by Sujit K
    I just migrated from Maven2 to Maven3, and I'm able to build each module individually, or all the modules in one shot, by calling mvn clean install. However, since we have a multi-module enterprise project, we build multiple EARs, and each EAR is built as its own module with its own child POM. To build an individual EAR with its dependents, the command below works fine in Maven2 but not in Maven3 (I'll explain the Maven3 issue in a moment):

        mvn -pl ear_module -rf first_dependent_module -am clean install

    In Maven2, when the reactor lists the build order, I see:

        first_dependent_module
        second_dependent_module
        ear_module

    At the end of the day I have my EAR module also part of the reactor, which is how it should be. The reason we call -rf is that we don't want to delete the target folder at the main ${project.basedir} (so as not to delete the output created in target from building the other EAR modules). With Maven3, however, this is all I see when the reactor lists the build order:

        first_dependent_module
        second_dependent_module

    Maven3 totally ignores the argument (ear_module) set via the -pl flag, which should also be built after its dependents have been. Not sure what I'm missing here. Any help/tips would be greatly appreciated. P.S: The build I'm making is similar to the one below... "Build specific module in multi-module project". Thanks, SK

    Read the article

  • Iterating through Event Log Entry Collection, IndexOutOfBoundsException

    - by fjdumont
    Hello, in a service application I am iterating through the Windows application event log to parse events, in order to react depending on the entry message. In the case that the event log is full (Windows usually makes sure there is enough space by deleting old entries - this is configurable in the eventvwr.exe settings), the service always runs into an IndexOutOfBoundsException while iterating through the EventLog.Entries collection. No matter how I iterate (for-loop, using the collection's enumerator, copying the collection into an array, ...), I can't seem to get rid of this 'bug'. Currently, I ensure that the log is not full, in order to keep the service running, by regularly deleting the last few items by parsing the event log file and deleting the last few nodes (don't beat me up, I couldn't find a better alternative...). How can I iterate through the collection without trying to access already deleted entries? Is there perhaps a more elegant method? I am only trying to access the logs written during the last x seconds (even LINQ failed to select those when the log is full - same exception); could this help? Thanks for any advice and hints, Frank

    Edit: I forgot to mention that my assumption is that the loops are accessing entries which are being deleted during iteration by Windows. Basically that is why I tried to clone the collection. Is there perhaps a way to lock the collection for a small amount of time for just my application?
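    A defensive sketch rather than a guaranteed fix (the assumption being exactly what the edit describes: Windows purges the oldest entries while the loop runs): walk the collection from the newest entry backwards, skip any entry that vanishes between the Count check and the indexer call, and stop once entries fall outside the time window of interest:

        using System;
        using System.Diagnostics;

        static void ParseRecentEntries(TimeSpan window)
        {
            EventLog log = new EventLog("Application");
            DateTime cutoff = DateTime.Now - window;
            EventLogEntryCollection entries = log.Entries;

            // Newest-to-oldest: the oldest entries are the ones Windows deletes first.
            for (int i = entries.Count - 1; i >= 0; i--)
            {
                try
                {
                    EventLogEntry entry = entries[i];
                    if (entry.TimeWritten < cutoff)
                        break; // older than the window of interest; done
                    // ... react to entry.Message here ...
                }
                catch (Exception)
                {
                    // Entry was purged between the Count check and the indexer
                    // call (the index error from the post); skip it and go on.
                }
            }
        }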

    Read the article

  • How do I build a filtered_streambuf based on basic_streambuf?

    - by swestrup
    I have a project that requires me to insert a filter into a stream, so that outgoing data will be modified according to the filter. After some research, it seems that what I want to do is create a filtered_streambuf like this:

        template <class StreamBuf>
        class filtered_streambuf : public StreamBuf {
            ...
        };

    and then insert a filtered_streambuf<> into whichever stream I need to be filtered. My problem is that I don't know what invariants I need to maintain while filtering a stream, in order to ensure that derived classes can work as expected. In particular, I may find I have filtered_streambufs built over other filtered_streambufs, and all the various stream inserters, extractors and manipulators must still work as expected. The trouble is that I just can't seem to work out what the minimal interface is that I need to supply in order to guarantee that an iostream will have what it needs to work correctly. In particular, do I need to fake the movement of the protected pointer variables, or not? Do I need a fake data buffer, or not? Can I just override the public functions, rewriting them in terms of the base streambuf, or is that too simplistic?
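    For output filtering specifically, the usual minimal surface is the virtual functions overflow() and sync(): an unbuffered filtering streambuf can leave the get/put pointer areas null, so every character lands in overflow(), gets transformed, and is forwarded to a wrapped streambuf. A sketch under those assumptions, with uppercasing as a stand-in filter:

        #include <cctype>
        #include <iostream>
        #include <streambuf>

        class upcase_streambuf : public std::streambuf {
        public:
            explicit upcase_streambuf(std::streambuf* dest) : dest_(dest) {}
        protected:
            // No put area was set up, so every character arrives here.
            int_type overflow(int_type ch) override {
                if (traits_type::eq_int_type(ch, traits_type::eof()))
                    return traits_type::not_eof(ch);
                char up = static_cast<char>(std::toupper(static_cast<unsigned char>(ch)));
                return dest_->sputc(up);  // forward the filtered character
            }
            int sync() override { return dest_->pubsync(); }
        private:
            std::streambuf* dest_;
        };

        int main() {
            upcase_streambuf buf(std::cout.rdbuf());
            std::ostream filtered(&buf);
            filtered << "hello, world\n" << std::flush;  // prints HELLO, WORLD
        }

    Because the filter only talks to the wrapped streambuf through its public interface, stacking one filtered buffer over another works the same way.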

    Read the article

  • Listening for TCP and UDP requests on the same port

    - by user339328
    I am writing a client/server set of programs. Depending on the operation requested by the client, I make a TCP or UDP request. Implementing the client side is straightforward, since I can easily open a connection with either protocol and send the request to the server side. On the server side, on the other hand, I would like to listen both for UDP and TCP connections on the same port. Moreover, I'd like the server to open a new thread for each connection request. I have adopted the approach explained in: link text. I have extended this code sample by creating new threads for each TCP/UDP request. This works correctly if I use TCP only, but it fails when I attempt to make UDP bindings. Please give me any suggestion how I can correct this. tnx

    Here is the server code:

        public class Server {
            public static void main(String args[]) {
                try {
                    int port = 4444;
                    if (args.length > 0)
                        port = Integer.parseInt(args[0]);
                    SocketAddress localport = new InetSocketAddress(port);

                    // Create and bind a tcp channel to listen for connections on.
                    ServerSocketChannel tcpserver = ServerSocketChannel.open();
                    tcpserver.socket().bind(localport);

                    // Also create and bind a DatagramChannel to listen on.
                    DatagramChannel udpserver = DatagramChannel.open();
                    udpserver.socket().bind(localport);

                    // Specify non-blocking mode for both channels, since our
                    // Selector object will be doing the blocking for us.
                    tcpserver.configureBlocking(false);
                    udpserver.configureBlocking(false);

                    // The Selector object is what allows us to block while waiting
                    // for activity on either of the two channels.
                    Selector selector = Selector.open();
                    tcpserver.register(selector, SelectionKey.OP_ACCEPT);
                    udpserver.register(selector, SelectionKey.OP_READ);

                    System.out.println("Server Sterted on port: " + port + "!");

                    // Load Map
                    Utils.LoadMap("mapa");
                    System.out.println("Server map ... LOADED!");

                    // Now loop forever, processing client connections
                    while (true) {
                        try {
                            selector.select();
                            Set<SelectionKey> keys = selector.selectedKeys();

                            // Iterate through the Set of keys.
                            for (Iterator<SelectionKey> i = keys.iterator(); i.hasNext();) {
                                SelectionKey key = i.next();
                                i.remove();
                                Channel c = key.channel();
                                if (key.isAcceptable() && c == tcpserver) {
                                    new TCPThread(tcpserver.accept().socket()).start();
                                } else if (key.isReadable() && c == udpserver) {
                                    new UDPThread(udpserver.socket()).start();
                                }
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                    System.err.println(e);
                    System.exit(1);
                }
            }
        }

    The UDPThread code:

        public class UDPThread extends Thread {
            private DatagramSocket socket = null;

            public UDPThread(DatagramSocket socket) {
                super("UDPThread");
                this.socket = socket;
            }

            @Override
            public void run() {
                byte[] buffer = new byte[2048];
                try {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);
                    String inputLine = new String(buffer);
                    String outputLine = Utils.processCommand(inputLine.trim());
                    DatagramPacket reply = new DatagramPacket(outputLine.getBytes(),
                            outputLine.getBytes().length, packet.getAddress(), packet.getPort());
                    socket.send(reply);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                socket.close();
            }
        }

    I receive:

        Exception in thread "UDPThread" java.nio.channels.IllegalBlockingModeException
            at sun.nio.ch.DatagramSocketAdaptor.receive(Unknown Source)
            at server.UDPThread.run(UDPThread.java:25)

    10x
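    The exception names the problem: once a DatagramChannel is put into non-blocking mode, receive() on the adaptor returned by socket() throws IllegalBlockingModeException, so the datagram has to be read through the channel itself. A sketch of the UDP branch under that assumption - read on the selector thread, then hand the payload off so the selector is not blocked:

        // Replacing: new UDPThread(udpserver.socket()).start();
        ByteBuffer buffer = ByteBuffer.allocate(2048);
        SocketAddress client = udpserver.receive(buffer); // via the channel, not socket()
        if (client != null) {
            buffer.flip();
            byte[] data = new byte[buffer.remaining()];
            buffer.get(data);
            // Hand "data" and "client" to a worker thread; the reply is then:
            String outputLine = Utils.processCommand(new String(data).trim());
            udpserver.send(ByteBuffer.wrap(outputLine.getBytes()), client);
        }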

    Read the article

  • Jetty 7 will not allow me to customize a session cookie path

    - by Bob Obringer
    Using Jetty 7.0.2, I am unable to set a custom session cookie path. I am hosting multiple sites on the same server, using Apache to proxy requests to the proper context. (Replaced http as htp, as Stack Overflow thinks my multiple links might be spam.)

        <VirtualHost *:80>
            ServerName context.domain.com
            ProxyRequests On
            ProxyPreserveHost Off
            <Proxy *:80>
                Order deny,allow
                Allow from 127.0.0.1
            </Proxy>
            ProxyPass / htp://localhost:8080/context/
            ProxyPassReverse / htp://localhost:8080/context/
            <Location />
                Order allow,deny
                Allow from all
            </Location>
        </VirtualHost>

    Jetty is running on the same server on port 8080 and my context is available @ /context. The user accesses the application @ htp://context.domain.com, but Jetty is setting the path for the session cookie @ /context. This prevents the browser from returning the cookie, since the actual path to the context is not being used. I need to override Jetty's default setting to set the cookie for the context, with the path at the root ( / ). In my Jetty webdefault.xml I have the following, which is partially working:

        <context-param>
            <param-name>org.eclipse.jetty.servlet.SessionCookie</param-name>
            <param-value>CustomCookieName</param-value>
        </context-param>
        <context-param>
            <param-name>org.eclipse.jetty.servlet.SessionPath</param-name>
            <param-value>/</param-value>
        </context-param>

    The cookie is properly set with a custom name, but it is NOT setting the SessionPath. No matter what I set the value to, it refuses to set a cookie at any path but /context. This has been driving me crazy, so any help would be greatly appreciated.

    Read the article

  • Getting the avg of the top 10 students from each school

    - by dave
    Hi all -- We have a school district with 38 elementary schools. The kids took a test. The averages for the schools are widely dispersed, but I want to compare the averages of JUST THE TOP 10 students from each school. Requirement: use temporary tables only. I have done this in a very work-intensive, error-prone sort of way, as follows (sch_code = e.g., 9043; schabbrev = e.g., "Carter"; totpct_stu = e.g., 61.3):

        DROP TEMPORARY TABLE IF EXISTS avg_top10;
        CREATE TEMPORARY TABLE avg_top10 (
            sch_code   VARCHAR(4),
            schabbrev  VARCHAR(75),
            totpct_stu DECIMAL(5,1)
        );

        INSERT INTO avg_top10
        SELECT sch_code, schabbrev, totpct_stu
        FROM test_table
        WHERE sch_code IN ('5489')
        ORDER BY totpct_stu DESC
        LIMIT 10;

        -- I do that last query for EVERY school, so the total
        -- length of the code is well in excess of 300 lines.
        -- Then, finally...

        SELECT schabbrev, ROUND( AVG( totpct_stu ), 1 ) AS top10
        FROM avg_top10
        GROUP BY schabbrev
        ORDER BY top10;

        -- OUTPUT:
        -- schabbrev   avg_top10
        -- ----------  ---------
        -- Goulding    75.4
        -- Garth       77.7
        -- Sperhead    81.4
        -- Oak_P       83.7
        -- Spring      84.9
        -- etc...

    Question: So this works, but isn't there a much better way to do it? Thanks! PS -- Looks like homework, but this is, well... real.
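    For comparison, a sketch of a single-statement approach on MySQL versions without window functions, using session variables to rank students within each school and then averaging the top 10 (this assumes test_table holds one row per student):

        SELECT schabbrev, ROUND(AVG(totpct_stu), 1) AS top10
        FROM (
            SELECT sch_code, schabbrev, totpct_stu,
                   @rank := IF(@prev = sch_code, @rank + 1, 1) AS rnk,
                   @prev := sch_code AS prev_code
            FROM test_table,
                 (SELECT @prev := NULL, @rank := 0) AS init
            ORDER BY sch_code, totpct_stu DESC
        ) AS ranked
        WHERE rnk <= 10
        GROUP BY schabbrev
        ORDER BY top10;

    The inner query numbers each school's students from best to worst; the outer query keeps only ranks 1-10 and averages per school, so the 38 hand-written INSERTs collapse into one statement.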

    Read the article

  • Rendering blocks side-by-side with FOP

    - by Rolf
    I need to generate a PDF from XML data using Apache FOP. The problem is that FOP doesn't support fo:float, and I really need to have items (boxes of rendered data) side-by-side in the PDF. More precisely, I need them in a 4x4 grid on each page. In HTML, I would simply render these as left-floated divs with appropriate widths and heights. My data looks something like this:

        <item id="1">
            <a>foo</a>
            <b>bar</b>
            <c>baz</c>
        </item>
        <item id="2">...</item>
        ...
        <item id="n">...</item>

    I considered using a two-column region-body, but then the order of items would be 1, 3, 2, 4 (reading from left to right) since they would be rendered tb-lr instead of lr-tb, and I need them to be in the correct order (id in above xml). I suppose I could try using a table, but I'm not quite sure how to group the items into table rows. So, some kind of workaround for the lack of fo:float would be greatly appreciated.
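    For the table route, a sketch of the usual XSLT grouping idiom (the fo: wrapper details are assumptions; only item and its a child come from the data above): pick every 4th item as a row starter, then pull in up to three following siblings as the remaining cells:

        <fo:table table-layout="fixed" width="100%">
            <fo:table-body>
                <xsl:for-each select="item[position() mod 4 = 1]">
                    <fo:table-row>
                        <xsl:for-each select=". | following-sibling::item[position() &lt;= 3]">
                            <fo:table-cell>
                                <fo:block><xsl:value-of select="a"/></fo:block>
                            </fo:table-cell>
                        </xsl:for-each>
                    </fo:table-row>
                </xsl:for-each>
            </fo:table-body>
        </fo:table>

    This keeps the reading order 1, 2, 3, 4 across each row, which the two-column region-body cannot do.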

    Read the article

  • MySQL Partitioning performance

    - by Imran Pathan
    We measured performance on key-partitioned tables and normal tables separately, but we couldn't find any performance improvement with partitioning, even though the queries are pruned. Using MySQL 5.1.47 on RHEL 4. Table details:

    UserUsage - has entries for user mobile number and data usage for each date; mobile number and date as the primary key. UserProfile - queries the previous table and stores a summary for each mobile number; mobile number as the primary key.

        CREATE TABLE `UserUsage` (
            `Msisdn` decimal(20,0) NOT NULL,
            `Date` date NOT NULL,
            ...
            PRIMARY KEY USING BTREE (`Msisdn`,`Date`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

        CREATE TABLE `UserProfile` (
            `Msisdn` decimal(20,0) NOT NULL,
            ...
            PRIMARY KEY (`Msisdn`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

    The second table is updated from a select on the first table, ordered by date, in a Perl program:

        select * from UserUsage where Msisdn=number order by Date desc limit 7
        -- [process data in Perl]
        update UserProfile values(....) where Msisdn=number

    EXPLAIN PARTITIONS for the select shows rows being scanned in a particular partition only. Is something wrong with the partition design or the queries, given that partitioning is taking almost the same or more time compared to normal tables?

    Read the article

  • How can I pass in a params of Expression<Func<T, object>> to a method?

    - by Pure.Krome
    Hi folks, I have the following two methods:

        public static IQueryable<T> IncludeAssociations<T>(this IQueryable<T> source,
            params string[] associations) { ... }

        public static IQueryable<T> IncludeAssociations<T>(this IQueryable<T> source,
            params Expression<Func<T, object>>[] expressions) { ... }

    Now, when I try to pass in a params of Expression<Func<T, object>>[], it always calls the first method (the string[] one, and of course that value is NULL). E.g.:

        Expression<Func<Order, object>> x1 = x => x.User;
        Expression<Func<Order, object>> x2 = x => x.User.Passport;
        var foo = _orderRepo
            .Find()
            .IncludeAssociations(new {x1, x2})
            .ToList();

    Can anyone see what I've done wrong? Why is it thinking my params are a string? Can I force the type of the 2x variables?
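    One sketch of a fix: new {x1, x2} creates an anonymous type rather than an array, so overload resolution never sees an Expression<Func<Order, object>>[]. Passing the expressions directly, or as a typed array, should bind the second overload:

        // params expansion picks the expression overload
        var foo = _orderRepo
            .Find()
            .IncludeAssociations(x1, x2)
            .ToList();

        // or, with the array spelled out explicitly
        var foo2 = _orderRepo
            .Find()
            .IncludeAssociations(new Expression<Func<Order, object>>[] { x1, x2 })
            .ToList();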

    Read the article

  • Visual Studio 2010 / ASP.NET MVC 2 / Publish

    - by SevenCentral
    I just did a clean install on Windows 7 x64 Professional with the final release of Visual Studio 2010 Premium. In order to duplicate what I'm experiencing, do the following:

        1. Create a new ASP.NET MVC 2 Web Application
        2. Right click the project and select Properties
        3. On the Web tab, select "Use Local IIS Web Server"
        4. Click on Create Virtual Directory
        5. Save all
        6. Unload the project
        7. Edit the project file
        8. Change MvcBuildViews to true
        9. Save all
        10. Reload project
        11. Right click the project and select Publish
        12. Choose the file system publish method
        13. Enter a target location
        14. Choose Delete all existing files
        15. Select Publish
        16. Right click the project
        17. Select Publish

    Each time I do the above I get the following error: "It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level..." The error originates from obj\debug\package\packagetmp\web.config, relative to the project directory. I can repeat this all day long with any MVC 2 project I've built. In order to fix this problem, I need to set MvcBuildViews to false in the project file. That's not really an option. This wasn't a problem in Visual Studio 2008, and it seems to be an issue with the way the Publish command stages files beneath the project directory. Can anyone else duplicate this error? Is this a bug or by design? Is there a fix, workaround, etc...? Thanks.
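    One workaround often suggested for this symptom - a sketch only; it edits the MvcBuildViews wiring in the .csproj, and the exact cleanup needed may differ per project - is to delete the web.config copies that Publish stages under obj\ before the view compiler runs, since that staged copy is what trips the allowDefinition='MachineToApplication' rule:

        <Target Name="MvcBuildViews" AfterTargets="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
          <ItemGroup>
            <ExtraWebConfigs Include="$(BaseIntermediateOutputPath)**\web.config" />
          </ItemGroup>
          <!-- Remove web.configs staged under obj\ by Package/Publish -->
          <Delete Files="@(ExtraWebConfigs)" ContinueOnError="true" />
          <AspNetCompiler VirtualPath="temp" PhysicalPath="$(WebProjectOutputDir)" />
        </Target>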

    Read the article

  • Retaining parameters in ASP.NET MVC

    - by MapDot
    Many MVC frameworks (e.g. PHP's Zend Framework) have a way of providing basic state management through URLs. The basic principle is this: any parameters that were not explicitly modified or un-set get copied into every URL. For instance, consider a listing with pagination. You'll have the order, direction and page number passed as URL parameters. You may also have a couple of filters. Changing the value of a filter should not alter the sort order. ASP.NET MVC seems to remember your controller and action by default:

        <%: Html.RouteLink("Next", "MyRoute", new {id = next.ItemId}) %>

    This will not re-set your action or controller. However, it does seem to forget all other parameters. The same is true of ActionLink. Parameters that get set earlier on in your URL seem to get retained as well. Is there a way to make it retain more than that? For instance, this does not seem to affect any of the links being generated:

        RouteValues.RouteData.Values["showDeleted"] = true;
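    A sketch of one way to get Zend-style carry-over (the helper name and behavior are my own, not a built-in API): an HtmlHelper extension that copies the current route values and query string into the new link, then applies the explicit overrides:

        public static MvcHtmlString PersistentActionLink(this HtmlHelper html,
            string linkText, string actionName, object overrides)
        {
            var values = new RouteValueDictionary(html.ViewContext.RouteData.Values);
            var query = html.ViewContext.HttpContext.Request.QueryString;
            foreach (string key in query)
                values[key] = query[key];                 // keep current URL parameters
            foreach (var kv in new RouteValueDictionary(overrides))
                values[kv.Key] = kv.Value;                // explicit changes win
            return html.ActionLink(linkText, actionName, values);
        }

    Used as <%: Html.PersistentActionLink("Next", "Index", new { page = 2 }) %>, it would keep filters and sort parameters intact while changing only the page.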

    Read the article

  • How to specify allowed exceptions in WCF's configuration file?

    - by tucaz
    Hello! I'm building a set of WCF services for internal use through all our applications. For exception handling, I created a default fault class so I can return a treated message to the caller when that applies, or a generic one when I have no clue what happened. Fault contract:

        [DataContract(Name = "DefaultFault", Namespace = "http://fnac.com.br/api/2010/03")]
        public class DefaultFault
        {
            public DefaultFault(DefaultFaultItem[] items)
            {
                if (items == null || items.Length == 0)
                {
                    throw new ArgumentNullException("items");
                }
                StringBuilder sbItems = new StringBuilder();
                for (int i = 0; i

    Specifying that my method can throw this fault, so the consuming client will be aware of it:

        [OperationContract(Name = "PlaceOrder")]
        [FaultContract(typeof(DefaultFault))]
        [WebInvoke(UriTemplate = "/orders",
                   BodyStyle = WebMessageBodyStyle.Bare,
                   ResponseFormat = WebMessageFormat.Json,
                   RequestFormat = WebMessageFormat.Json,
                   Method = "POST")]
        string PlaceOrder(Order newOrder);

    Most of the time we will use just .NET to .NET communication with the usual bindings, and everything works fine since we are talking the same language. However, as you can see in the service contract declaration, I have a WebInvoke attribute (and a webHttp binding) in order to be able to also talk JSON, since one of our apps will be built for iPhone and this guy will talk JSON. My problem is that whenever I throw a FaultException and have includeExceptionDetails="false" in the config file, the calling client gets a generic HTTP error instead of my custom message. I understand that this is the correct behavior when includeExceptionDetails is turned off, but I think I saw, a long time ago, some configuration to allow some exceptions/faults to pass through the service boundaries. Is there such a thing? If not, what do you suggest for my case? Thanks a LOT!
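    One behavior worth checking, offered as a sketch of the usual rules rather than a definitive diagnosis: includeExceptionDetailInFaults only governs whether details of unhandled exceptions are serialized; a fault you declare with [FaultContract] and throw as a typed FaultException<T> should cross the service boundary even with that setting off:

        // inside the PlaceOrder implementation (DefaultFaultItem contents are placeholders)
        var fault = new DefaultFault(new[] { new DefaultFaultItem(/* ... */) });
        throw new FaultException<DefaultFault>(fault,
            new FaultReason("Order could not be placed."));

    Over the webHttp/JSON endpoint, though, SOAP faults do not map cleanly onto an HTTP response; one common workaround there is to catch the condition and set the status and body yourself via WebOperationContext.Current.OutgoingResponse.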

    Read the article

  • ZFS, dedupe and PST files

    - by Unreason
    I am interested to know what the expected maximum dedupe ratio would be for a set of PST files. I have ~40G of PST files from ~15 users, with a high level of duplication of attachments. I am running tests to see if I can get significant space savings if I store the data on ZFS with dedupe. For this purpose I have installed a test setup of Nexenta, but was wondering if someone here has already done this and what level of deduplication I might expect (or in other words, how sensitive are PST files to block alignment, and what are the parameters that can influence the ratio?). Initial tests show a very low dedupe ratio, and I did find the explanation that block-level dedupe would not be efficient here and that byte-level dedupe would be much better (and that it should be performed by an application that is aware of the internal organization), so I am just double checking here if someone has some more input. Otherwise I will probably be converting the PST files to IMAP.
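    For what it's worth, a quick way to estimate the ratio before committing (a sketch; the pool and dataset names are placeholders) is ZFS's built-in dedup simulation, which walks existing data and prints a simulated dedup table histogram:

        # simulate dedup on an existing pool without enabling it
        zdb -S tank

        # a smaller recordsize can improve block-alignment hits for container
        # formats like PST, at the cost of more metadata - an assumption to
        # test against your data, not a guarantee
        zfs set recordsize=8K tank/pst
        zfs set dedup=on tank/pst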

    Read the article

  • Replace string with incremented value

    - by Andrei
    Hello, I'm trying to write a CSS parser to automatically dispatch URLs in background images to different subdomains, in order to parallelize downloads. Basically, I want to replace things like

        url(/assets/some-background-image.png)

    with

        url(http://assets[increment].domain.com/assets/some-background-image.png)

    I'm using this inside a class that I eventually want to evolve into doing various CSS parsing tasks. Here are the relevant parts of the class:

        private function parallelizeDownloads() {
            static $counter = 1;
            $newURL = "url(http://assets".$counter.".domain.com";
            // The counter needs to be reset when it reaches 4,
            // in order to limit it to 4 subdomains.
            if ($counter == 4) {
                $counter = 1;
            }
            $counter++;
            return $newURL;
        }

        public function replaceURLs() {
            // This is mostly nonsense, but I know the code I'm looking for
            // looks somewhat like this. Note: $this->css contains the CSS string.
            preg_match("/url/i", $this->css, $match);
            foreach ($match as $URL) {
                $newURL = self::parallelizeDownloads();
                $this->css = str_replace($match, $newURL, $this->css);
            }
        }
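    A sketch of a tighter approach (the regex and the rotation are assumptions about your CSS, not a drop-in; the closure needs PHP 5.3+): let preg_replace_callback find each root-relative url(...) and rewrite it, cycling through assets1-assets4 with a modulo instead of the reset-then-increment logic, which as written skips assets1 after the first cycle:

        public function replaceURLs() {
            $counter = 0;
            $this->css = preg_replace_callback(
                '~url\(\s*[\'"]?(/[^\'")]+)[\'"]?\s*\)~i',   // only root-relative URLs
                function ($m) use (&$counter) {
                    $host = ($counter++ % 4) + 1;            // cycles 1,2,3,4,1,...
                    return "url(http://assets{$host}.domain.com{$m[1]})";
                },
                $this->css
            );
        }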

    Read the article

  • Service reference not generating client types

    - by Cranialsurge
    I am trying to consume a WCF service in a class library by adding a service reference to it. In one of the class libraries it gets consumed properly and I can access the client types in order to generate a proxy off of them. However, in my second class library (or even in a console test app), when I add the same service reference, it only exposes the types that are involved in the contract operations, and not the client types for me to generate a proxy against. E.g., the endpoint has 2 services exposed: ISvc1 and ISvc2. When I add a service reference to this endpoint in the first class library, I get ISvc1Client and ISvc2Client to generate proxies off of, in order to use the operations exposed via those 2 contracts. In addition to these clients, the service reference also exposes the types involved in the operations, like Type 1, Type 2, etc., which is what I need. However, when I try to add a service reference to the same endpoint in another console application or class library, only Type 1, Type 2, etc. are exposed, and not ISvc1Client and ISvc2Client, because of which I cannot generate a proxy to access the operations I need. I am unable to determine why the service reference gets properly generated in one class library but not in the other, or in the test console app.

    Read the article

  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I'm finding myself writing extremely long and complex queries. I tend to write queries within queries a lot, because I feel making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered using subqueries, because it felt easier and quicker for me to write and maintain. Commonly, I'll do subqueries that may contain one or more subqueries from related tables. Consider this example:

        SELECT
            (SELECT username FROM users WHERE records.user_id = user_id) AS username,
            (SELECT last_name||', '||first_name FROM users WHERE records.user_id = user_id) AS name,
            in_timestamp,
            out_timestamp
        FROM records
        ORDER BY in_timestamp

    Rarely, I'll do subqueries after the WHERE clause. Consider this example:

        SELECT
            user_id,
            (SELECT name
             FROM organizations
             WHERE (SELECT organization
                    FROM locations
                    WHERE records.location = location_id) = organization_id
            ) AS organization_name
        FROM records
        ORDER BY in_timestamp

    In these two cases, would I see any sort of improvement if I decided to rewrite the queries using a JOIN? As a more blanket question, what are the advantages/disadvantages of using subqueries or a JOIN? Is one way more correct or accepted than the other?
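    For comparison, sketches of the same two queries as JOINs, with all table and column names taken from the examples above:

        SELECT u.username,
               u.last_name || ', ' || u.first_name AS name,
               r.in_timestamp,
               r.out_timestamp
        FROM records r
        JOIN users u ON u.user_id = r.user_id
        ORDER BY r.in_timestamp;

        SELECT r.user_id,
               o.name AS organization_name
        FROM records r
        JOIN locations l ON l.location_id = r.location
        JOIN organizations o ON o.organization_id = l.organization
        ORDER BY r.in_timestamp;

    A scalar subquery in the SELECT list is conceptually evaluated once per output row, while a JOIN hands the optimizer the whole relationship up front, so on most engines the JOIN form is at least as fast and usually easier to index for.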

    Read the article

  • 2-way databinding with Entity Framework and WPF DataGrid: is it possible?

    - by Panindra
    I am working on a POS application using SQL CE, WPF and Entity Framework 3.5 SP2, and I am trying to use a DataGrid as my order entry control for the user to enter product orders. I am planning to bind this to the Entity Framework model, and I am looking for two-way updating:

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            using (MasterEntities nwEntities = new MasterEntities())
            {
                var users = from d in nwEntities.Companies
                            select new { d.CompanyId, d.CompanyName, d.Place };
                listBox1.DataContext = users;
                dataGrid1.DataContext = users;
                // foreach (String c in customers)
                // {
                //     MessageBox.Show(c.ToString());
                // }
            }
        }

    When I try to double click on the DataGrid, it throws an error with the caption "InvalidOperationException was unhandled" and the message:

        A TwoWay or OneWayToSource binding cannot work on the read-only property 'CompanyId'
        of type '<>f__AnonymousType0`3[System.Int32,System.String,System.String]'.

    What's wrong here? My XAML goes like this:

        <Grid>
            <ListBox Name="listBox1" ItemsSource="{Binding}" />
            <Button Content="Show " Name="button1" Click="button1_Click" />
            <DataGrid AutoGenerateColumns="False" Name="dataGrid1" ItemsSource="{Binding}">
                <DataGrid.Columns>
                    <DataGridTextColumn Header=" ID" Binding="{Binding CompanyId}"/>
                    <DataGridTextColumn Header="Company Name" Binding="{Binding CompanyName}"/>
                    <DataGridTextColumn Header="Place" Binding="{Binding Place}" />
                </DataGrid.Columns>
            </DataGrid>
        </Grid>

    EDITED: I made the changes shown by @vorrtex, but then I added another button to save the changes, and in its click event I added the following code, which is showing an updating error:

        private void button2_Click(object sender, RoutedEventArgs e)
        {
            nwEntities.SaveChanges();
        }
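    A sketch of the usual fix, assuming the goal is to edit Company rows directly: anonymous types are read-only, so two-way binding needs the entities themselves, and the ObjectContext has to outlive the click handler so that SaveChanges() can see the edits:

        private MasterEntities nwEntities;   // a field, not a using block

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            nwEntities = new MasterEntities();
            // Bind the entities themselves; their properties are writable,
            // so TwoWay bindings and EF change tracking both work.
            dataGrid1.ItemsSource = nwEntities.Companies.ToList();
        }

        private void button2_Click(object sender, RoutedEventArgs e)
        {
            nwEntities.SaveChanges();  // persists edits made through the grid
        }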

    Read the article

  • Visual Studio reports that not all code paths return a value, even though they do

    - by chris12892
    I have an API in NETMF C# that I am writing that includes a function to send an HTTP request. For those who are familiar with NETMF, this is a heavily modified version of the "webClient" example, which is a simple application that demonstrates how to submit an HTTP request and receive a response. In the sample, it simply prints the response and returns void. In my version, however, I need it to return the HTTP response. For some reason, Visual Studio reports that not all code paths return a value, even though, as far as I can tell, they do. Here is my code...

        /// <summary>
        /// This is a modified webClient
        /// </summary>
        /// <param name="url"></param>
        private string httpRequest(string url)
        {
            // Create an HTTP Web request.
            HttpWebRequest request = HttpWebRequest.Create(url) as HttpWebRequest;

            // Set request.KeepAlive to use a persistent connection.
            request.KeepAlive = true;

            // Get a response from the server.
            WebResponse resp = request.GetResponse();

            // Get the network response stream to read the page data.
            if (resp != null)
            {
                Stream respStream = resp.GetResponseStream();
                string page = "";
                byte[] byteData = new byte[4096];
                char[] charData = new char[4096];
                int bytesRead = 0;
                Decoder UTF8decoder = System.Text.Encoding.UTF8.GetDecoder();
                int totalBytes = 0;

                // allow 5 seconds for reading the stream
                respStream.ReadTimeout = 5000;

                // If we know the content length, read exactly that amount of
                // data; otherwise, read until there is nothing left to read.
                if (resp.ContentLength != -1)
                {
                    for (int dataRem = (int)resp.ContentLength; dataRem > 0; )
                    {
                        Thread.Sleep(500);
                        bytesRead = respStream.Read(byteData, 0, byteData.Length);
                        if (bytesRead == 0)
                            throw new Exception("Data less than expected");
                        dataRem -= bytesRead;

                        // Convert from bytes to chars, and add to the page string.
                        int byteUsed, charUsed;
                        bool completed = false;
                        totalBytes += bytesRead;
                        UTF8decoder.Convert(byteData, 0, bytesRead, charData, 0,
                            bytesRead, true, out byteUsed, out charUsed, out completed);
                        page = page + new String(charData, 0, charUsed);
                    }
                    page = new String(System.Text.Encoding.UTF8.GetChars(byteData));
                }
                else
                    throw new Exception("No content-Length reported");

                // Close the response stream. For Keep-Alive streams, the
                // stream will remain open and will be pushed into the unused
                // stream list.
                resp.Close();
                return page;
            }
        }

    Any ideas? Thanks...
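    The compiler is right in a narrow sense: when resp is null, execution falls past the if block and reaches the end of the method without returning anything, and the flow analysis cannot know that GetResponse() never hands back null. A minimal sketch of the fix is to make the null path explicit:

        // Get a response from the server.
        WebResponse resp = request.GetResponse();
        if (resp == null)
            throw new Exception("No response received");
        // ...rest of the method follows, no longer nested in an if block...

    Alternatively, keep the if (resp != null) block as-is and add a return or throw after its closing brace.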

    Read the article

  • wordpress 500 - Internal server error

    - by asad
    Hello folks, I installed WordPress 2.9.2 a few days ago and it works correctly. Today I wanted to use the permalink feature of WordPress. I know I must modify the .htaccess file in my site root, but in my sub-domain root there is no .htaccess file. So I created the .htaccess file with the following content in the sub-domain root (next to the index.php file):

        <files .htaccess>
            order allow,deny
            deny from all
        </files>

        ServerSignature Off

        <files wp-config.php>
            order allow,deny
            deny from all
        </files>

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

        Options All -Indexes

        AddType x-mapp-php5 .php
        AddHandler x-mapp-php5 .php

    But after saving it, I lost access to my blog and I get the following error:

        500 - Internal server error.
        There is a problem with the resource you are looking for, and it cannot be displayed.

    After this I removed the .htaccess file, but that did not fix it. What can I do about it? Cheers
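    One hedged debugging step: directives like Options and AddHandler are exactly the kind of thing many shared hosts forbid in .htaccess (which surfaces as a 500 rather than a clear message), so it may be worth testing with only the standard WordPress block first and re-adding the other directives one at a time:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress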

    Read the article

  • Linux: Limiting data throughput (pipe) in bytes per second?

    - by sdaau
    Hi all, I was wondering if there is a Linux program that can limit the data throughput of a pipe - in actual bytes per second. From what I gather, applicable for the purpose would be bfr; however, it has been removed from Debian (Removal candidate: bfr). cpipe might work, but the lowest resolution it supports is kB/s, meaning that buffer writes can still reach MB/s ([SOLVED] Is there a program to limit terminal pipe speed? - Page 2 - Ubuntu Forums). What I'd want is to be able to specify something like

        cat example.txt | ratelimit -Bps 100 > /dev/ttyUSB0

    ... and actually have a single byte from example.txt sent each 1/100 = 0.01 sec (or 10 ms) to the output. Thanks in advance for any suggestions, Cheers!
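    One tool worth trying (assuming it behaves the same on your distribution): pv rate-limits in bytes per second when -L is given a plain number, so the hypothetical ratelimit call above would become:

        # -q: no progress display; -L 100: limit throughput to 100 bytes/second
        cat example.txt | pv -qL 100 > /dev/ttyUSB0

    Note that pv smooths the average rate rather than strictly pacing one byte every 10 ms, so it may still write in small bursts.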

    Read the article
