Search Results

Search found 25727 results on 1030 pages for 'solution'.


  • mediaelement.js play/pause on click controls don't work

    - by iKode
    I need to play and pause the video player when any part of the video is clicked, so I
    implemented something like:

        $("#promoPlayer").click(function() {
            if (this.paused) {
                this.play();
            } else {
                this.pause();
            }
        });

    ...but now the controls of the video won't pause/play. You can see it in action here
    (http://175.107.134.113:8080/); the video is the second slide on the main carousel at the
    top. I suspect I'm now getting two click events: one from my code above and a second from
    the actual pause/play button. I don't understand how the actual video controls work,
    because when I inspect the element, I just see a <video> tag. If I could differentiate the
    controls, then perhaps I might have figured out a solution by now. Does anyone know how I
    should have done this, or is my solution OK? If my solution is OK, how do I intercept a
    click on the control so that I only have one click event to pause/play? If you look at the
    MediaElement.js home page, they have a big play button, complete with mouseover, but I
    can't see how they have done that. Can someone help?
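
    A minimal sketch of one way around the double toggle, assuming the player wraps a plain
    <video> element (the selector and DOM structure are assumptions): bind to the <video>
    itself and stop the event from bubbling, so clicks on the control bar reach only the
    controls' own handlers.

        $("#promoPlayer video").click(function(e) {
            e.stopPropagation();   // keep this click from also firing outer handlers
            if (this.paused) {
                this.play();
            } else {
                this.pause();
            }
        });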

    Read the article

  • AnkhSVN: Cannot checkout Subsolution due to existing "versioned" folder

    - by lostiniceland
    Hello everyone. I have been using Subversion for quite some time for Java development,
    and I have set up a repository on my local NAS. Since I have an MSDN subscription via my
    company, I recently installed Visual Studio 2010 to do a small project with .NET.
    Following some "best practices", my project folder looks like this:

        MySolution
            main.sln
            Services
                services.sln
                Service A (files)
                Service A Test (files)
            View (project files)
            Persistence
                persistence.sln
                PersistenceXml (files)
                PersistenceXml Test (files)
                PersistenceDB (files)
                PersistenceDB Test (files)

    The idea is that main.sln contains only the projects for the application, meaning no test
    projects. The subsolutions contain the project(s) and their corresponding test projects.
    I was able to put all those projects under version control with AnkhSVN, so I have the
    same structure in my trunk, and committing changes was no problem. Now I would like to
    check this out on another machine. I was able to check out main.sln, which downloaded
    everything inside that solution; it skipped services.sln, persistence.sln and all the
    test projects. Up to this point everything is fine. Now, here comes the problem: when I
    try to check out a subsolution (e.g. services.sln) I get an error, I think it was
    UnsupportedOperation. I guess this happens because AnkhSVN is trying to download the
    folder Service A again and create its hidden .svn folder, which is already present. The
    only workaround I can think of right now is installing TortoiseSVN and checking out the
    whole thing at once. It would be nicer, though, to do everything from within VS. Does
    anyone know how I can solve this? Is another client the only solution?

    Read the article

  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: let's say we have a stored procedure that reports data back to a user on a
    webpage. The user can set a date range. If the user sets today's date as the "end date,"
    which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say
    one user runs a report (5/1/2010 to now) over and over, several times. On the webpage,
    the user sees "5/1/2010" to "5/4/2010." But the web app passes DateTime.Now to the SQL
    proc as the end date, so the end date in the proc will always be different, although the
    user is querying a similar date range. Assume the number of records in the table and the
    number of users are large, so any performance gains matter; hence the importance of the
    question. Question: does passing DateTime.Now as a parameter to a proc prevent SQL Server
    from caching the query plan? If so, is the web app missing out on huge performance gains?
    Possible solution: I thought DateTime.Today.AddDays(1) might be a solution. It would
    allow the user to get the latest data and always pass the same end date to the SQL proc
    ("5/5/2010" in this case). Please speak to this as well. Sample proc and execution (if
    that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned by both calls, since the current time is 2010-05-04 15:41:27.
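
    A minimal C# sketch of the proposed workaround (an ADO.NET fragment; connection and
    startDate are assumed to be in scope): passing midnight tonight as an exclusive upper
    bound keeps the parameter value stable for the whole day.

        // exclusive upper bound: first instant of tomorrow, identical all day long
        DateTime endDate = DateTime.Today.AddDays(1);
        using (var cmd = new SqlCommand("GetFooData", connection))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@StartDate", startDate);
            cmd.Parameters.AddWithValue("@EndDate", endDate);
            // ... execute the reader as usual
        }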

    Read the article

  • MemoryStream and the Large Object Heap

    - by Flo
    I have to transfer large files between computers over unreliable connections using WCF.
    Because I want to be able to resume a transfer, and I don't want my file size limited by
    WCF, I am chunking the files into 1 MB pieces. These chunks are transported as streams,
    which works quite nicely so far. My steps are:

        1. Open a FileStream.
        2. Read a chunk from the file into a byte[] and create a MemoryStream from it.
        3. Transfer the chunk.
        4. Go back to 2 until the whole file is sent.

    My problem is in step 2. I assume that when I create a MemoryStream from a byte array,
    the array will end up on the LOH and ultimately cause an OutOfMemoryException. I could
    not actually reproduce this error, so maybe my assumption is wrong. Now, I don't want to
    send the byte[] in the message, as WCF will tell me the array size is too big. I can
    change the maximum allowed array size and/or the size of my chunks, but I hope there is
    another solution. My actual questions: Will my current solution create objects on the
    LOH, and will that cause me problems? Is there a better way to solve this? By the way, on
    the receiving side I simply read smaller chunks from the arriving stream and write them
    directly into the file, so no large byte arrays are involved.
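
    A hedged sketch of the chunking loop described above (names are illustrative): a 1 MB
    buffer is indeed above the roughly 85 KB LOH threshold, but reusing one buffer across
    iterations means a single LOH allocation instead of one per chunk.

        const int ChunkSize = 1024 * 1024;
        byte[] buffer = new byte[ChunkSize];   // allocated once; lands on the LOH
        using (FileStream file = File.OpenRead(path))
        {
            int read;
            while ((read = file.Read(buffer, 0, ChunkSize)) > 0)
            {
                // wrap the existing buffer instead of copying it
                using (var chunk = new MemoryStream(buffer, 0, read, false))
                {
                    SendChunk(chunk);   // hypothetical WCF client call
                }
            }
        }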

    Read the article

  • Counting string length in JavaScript and Ruby on Rails

    - by williamjones
    I've got a text area on a web site that should be limited in length. I'm allowing users
    to enter 255 characters, and am enforcing that limit with a Rails validation:

        validates_length_of :body, :maximum => 255

    At the same time, I added a JavaScript character counter like you see on Twitter, to give
    the user feedback on how many characters he has already used and to disable the submit
    button when over length. I get that length in JavaScript with a call like:

        element.length

    Lastly, to enforce data integrity, I created this field in my Postgres database as a
    varchar(255), as a last line of defense. Unfortunately, these methods of counting
    characters do not appear to be directly compatible. JavaScript counts the way users
    expect, where everything is a single character. Once the submission hits Rails, however,
    all of the line breaks have been converted to \r\n, now taking up two characters' worth
    of space, which makes a close call fail the Rails validation. Even if I were to hand-code
    a different length validation in Rails, I think it would still fail when it hits the
    database, though I haven't confirmed this yet. What's the best way to make all this work
    the way the user would want? Best solution: an approach that meets user expectations,
    where each character of any type counts as only one character. If this means increasing
    the length of the varchar database field, a user should still not be able to sneakily
    send a hand-crafted POST that creates a row with more than 255 letters. Somewhat
    acceptable solution: a JavaScript change that shows the user the real character count,
    such that hitting return increments the counter two characters at a time, while properly
    handling all symbols that might have these strange behaviors.
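
    A hedged sketch of the "somewhat acceptable" option (the function name is illustrative):
    normalize every line break to \r\n before counting, so the JavaScript counter matches
    what Rails and Postgres will see.

        function serverLength(text) {
            // count each newline the way the server will store it: as "\r\n"
            return text.replace(/\r\n|\r|\n/g, "\r\n").length;
        }

    The counter and the submit-button toggle would then test serverLength(element.value)
    instead of element.value.length.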

    Read the article

  • How to execute PHPUnit?

    - by user1280667
    PHPUnit can be executed like this:

        phpunit --log-junit results.xml ClassName filename.php

    (I need the XML report for my continuous integration platform.) My problem is that I work
    with an MVC framework where all pages are called through
    pathofproject/indexCLI.php module=moduleName class=className etc., with three arguments in
    total when using the shell (and path/index.php?argum=... for URLs), so I can't call
    phpunit pathofproject/indexCLI.php module=moduleName class=className. I can think of a few
    possible approaches, and I hope you can help me use one of them. First, can the phpunit
    command be used with this style of invocation at all? As far as I can tell it can't,
    because phpunit expects a class name and a file name (its default behaviour). Second, when
    I call the same entry point from the shell, like php path/indexCLI.php module="blabla"
    etc., I get the assertion results in my console, but I can't use the JUnit XML option.
    Can I? My last resort is to call the URL in a browser like Mozilla, but I don't know how
    to tell the PHPUnit runner to choose the XML report instead of the HTML report. The goal,
    for me, is to get an XML report.
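
    One possible approach, sketched under assumptions about the framework (all names here are
    hypothetical): wrap the front controller in an ordinary PHPUnit test class, so the
    standard runner and its --log-junit switch can be used unchanged.

        <?php
        // hypothetical wrapper test: boots the MVC entry point with the
        // arguments it expects, then asserts against its captured output
        class ModuleSmokeTest extends PHPUnit_Framework_TestCase
        {
            public function testModuleRuns()
            {
                $_GET['module'] = 'moduleName';   // simulate the CLI arguments
                $_GET['class']  = 'className';
                ob_start();
                require dirname(__FILE__) . '/indexCLI.php';
                $output = ob_get_clean();
                $this->assertNotEmpty($output);
            }
        }

    It could then be run as: phpunit --log-junit results.xml ModuleSmokeTest ModuleSmokeTest.php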

    Read the article

  • Introduction to vectorizing in MATLAB - any good tutorials?

    - by Gacek
    I'm looking for good tutorials on vectorizing loops in MATLAB. I have a fairly simple
    algorithm, but it uses two for loops. I know it should be simple to vectorize, and I would
    like to learn how to do it instead of asking you for the solution. But to let you know
    what problem I have, so you can suggest the tutorials best suited to solving similar
    problems, here's the outline:

        B = zeros(size(A));  % A is a given matrix
        for i = 1:size(A,1)
            for j = 1:size(A,2)
                H = ...                  % take some surrounding elements of the element
                                         % at position (i,j) (i.e. using a 3x3 mask)
                B(i,j) = computeSth(H);  % compute something on the selected elements
                                         % and place it in B
            end
        end

    So, I'm NOT asking for the solution. I'm asking for good tutorials and examples of
    vectorizing loops in MATLAB. I would like to learn how to do it and do it on my own.
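
    To keep with the spirit of the question, here is only a minimal, generic illustration of
    the idea (deliberately not a solution to the 3x3-mask problem): the same elementwise
    computation written as a loop and vectorized.

        % loop version
        y = zeros(size(x));
        for k = 1:numel(x)
            y(k) = x(k)^2 + 1;
        end

        % vectorized equivalent: elementwise operators replace the loop
        y = x.^2 + 1;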

    Read the article

  • Event feed implementation - will it scale?

    - by SlappyTheFish
    Situation: I am currently designing a feed system for a social website whereby each user
    has a feed of their friends' activities. I have two possible methods of generating the
    feeds, and I would like to ask which is better in terms of ability to scale. Events from
    all users are collected in one central database table, event_log. Users are paired as
    friends in the table friends. The RDBMS we are using is MySQL. Standard method: when a
    user requests their feed page, the system generates the feed by inner joining event_log
    with friends. The result is then cached and set to time out after 5 minutes. Scaling is
    achieved by varying this timeout. Hypothesised method: a task runs in the background and,
    for each new, unprocessed item in event_log, creates entries in the database table
    user_feed, pairing that event with all of the users who are friends with the user who
    initiated it. One table row pairs one event with one user. The problems with the standard
    method are well known: what if a lot of people's caches expire at the same time? The
    solution also does not scale well, and the brief is for feeds to update as close to real
    time as possible. The hypothesised solution seems much better to me: all processing is
    done offline, so no user waits for a page to generate, and there are no joins, so the
    database tables can be sharded across physical machines. However, if a user has 100,000
    friends and creates 20 events in one session, that results in inserting 2,000,000 rows
    into the database. Question: it boils down to two points. Is the worst-case scenario
    mentioned above problematic, i.e. does table size have an impact on MySQL performance,
    and are there any issues with this mass inserting of data for each event? Is there
    anything else I have missed?
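
    A hedged SQL sketch of the fan-out step in the hypothesised method (the processed flag
    and column names are assumptions): one INSERT ... SELECT pairs each unprocessed event
    with every friend of its author.

        -- run inside one transaction so no event is fanned out twice
        INSERT INTO user_feed (user_id, event_id)
        SELECT f.friend_id, e.id
        FROM event_log e
        INNER JOIN friends f ON f.user_id = e.user_id
        WHERE e.processed = 0;

        UPDATE event_log SET processed = 1 WHERE processed = 0;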

    Read the article

  • What is the easiest way to add compression to WCF in Silverlight?

    - by caryden
    I have a Silverlight 2 Beta 2 application that accesses a WCF web service. Because of
    this, it can currently only use basicHttpBinding. The web service returns fairly large
    amounts of XML data. This seems fairly wasteful from a bandwidth standpoint, as the
    response, if zipped, would be smaller by a factor of 5 (I actually pasted the response
    into a text file and zipped it). The request does carry the "Accept-Encoding: gzip,
    deflate" header. Is there any way to have the WCF service gzip (or otherwise compress)
    the response? I did find this link, but it sure seems a bit complex for functionality
    that should be handled out of the box, IMHO. OK, at first I marked the solution using
    System.IO.Compression as the answer, as I could never "seem" to get IIS7 dynamic
    compression to work. Well, as it turns out: dynamic compression on IIS7 was working all
    along. It is just that Nikhil's Web Developer Helper plugin for IE did not show it
    working. My guess is that since Silverlight hands the web service call off to the
    browser, the browser handles it "under the covers" and Nikhil's tool never sees the
    compressed response. I was able to confirm this by using Fiddler, which monitors traffic
    outside the browser application. In Fiddler, the response was, in fact, gzip compressed!
    The other problem with the System.IO.Compression solution is that System.IO.Compression
    does not exist in the Silverlight CLR. So from my perspective, the EASIEST way to enable
    WCF compression in Silverlight is to enable dynamic compression in IIS7 and write no code
    at all.
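
    For reference, a minimal sketch of switching dynamic compression on in IIS7
    configuration, assuming the Dynamic Content Compression module is installed:

        <system.webServer>
            <urlCompression doDynamicCompression="true" />
        </system.webServer>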

    Read the article

  • Trouble implementing Singleton pattern in Tomcat web application due to Class Loader

    - by jwegan
    I'm trying to implement a singleton in Tomcat 6.24 on Linux with x86_64 OpenJDK 1.6. My
    application is just a bunch of JSPs and some static content, and the JSPs make calls into
    my Java code. Currently the web.xml looks like this:

        <web-app xmlns="http://java.sun.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                                     http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
                 version="2.5">
            <description>App Name</description>
            <display-name>App Name</display-name>
            <!-- The Usual Welcome File List -->
            <welcome-file-list>
                <welcome-file>pages/index.jsp</welcome-file>
            </welcome-file-list>
        </web-app>

    Before, when I was trying to load my singleton, it was getting instantiated twice, since
    the class was being loaded by two different class loaders (I'm not sure why), and each
    loader would create an instance of the singleton, which is not acceptable for my
    application. I finally figured out that if I exported my code as a JAR and put it in
    $CATALINA_HOME/lib, then there was only one instance, but this is not an elegant
    solution. I've been googling for hours but haven't come up with anything yet, and I'm
    wondering if there is some other solution. Currently I'm not precompiling my JSPs; could
    this be part of the problem? Could I write a servlet to ensure the singleton is created?
    If so, how do I do that?
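
    A hedged Java sketch of the servlet-lifecycle route hinted at above (class names are
    illustrative): a ServletContextListener creates the singleton exactly once when the
    webapp starts, and every JSP reaches the same instance through the ServletContext.

        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        public class SingletonBootstrap implements ServletContextListener {
            public void contextInitialized(ServletContextEvent sce) {
                // runs once per webapp deployment, in the webapp's class loader
                sce.getServletContext().setAttribute("appSingleton",
                        MySingleton.getInstance());
            }
            public void contextDestroyed(ServletContextEvent sce) { }
        }

    It would be registered in web.xml with a <listener> element naming the class, e.g.
    <listener-class>com.example.SingletonBootstrap</listener-class>.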

    Read the article

  • Git + GitHub + Heroku

    - by Haseeb Khan
    Hi all, I am new to the world of Git, GitHub and Heroku. So far I am enjoying this
    paradigm, but coming from an SVN background, things seem a bit complicated to me in the
    world of Git. I am facing a problem for which I am looking for a solution. Scenario: I
    have set up a new private project on GitHub. I forked the private project, and now I have
    the following structure in my branch:

        /project
            /apps
                /my-apps
                    /my-app-1
                        ....
                    /my-app-2
                        ....
                /your-apps
                    /your-app-1
                        ....
                    /your-app-2
                        ....
            /plugins
                ....

    I can commit code to my fork on GitHub from my machine in any of the folders I want.
    Later on, these commits are pulled into the master repository by the admin of the
    project. For every individual application in the apps folder, I have set up an app on
    Heroku, which is a Git repo in itself, and I push my changes there when I am done with my
    user stories. In short, every app in the apps folder is a Rails app hosted on Heroku.
    Problem: what I want is that when I push my changes to Heroku, they can be committed to
    my project fork on GitHub as well, so it also has the latest code at all times. The issue
    I see is that each app on Heroku is a Git repo in itself, while the folders I have on
    GitHub are part of one repo. So far, my research has turned up something known as a
    submodule in the Git world, which could come to the rescue, but I have not been able to
    find newbie-friendly instructions. Could someone in the community be kind enough to share
    their thoughts and help me identify a solution to this problem? Thanks in advance.
    Regards, Haseeb Khan haseeb [AT] tkxel.com TkXel
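
    A hedged sketch of the submodule route (all URLs and paths here are hypothetical): each
    Heroku app repo is registered as a submodule of the GitHub project, so pushing the app to
    Heroku and recording the new revision in the fork become two small steps.

        # inside a clone of the GitHub fork: register the Heroku app repo
        git submodule add git@heroku.com:my-app-1.git apps/my-apps/my-app-1
        git commit -m "Track my-app-1 as a submodule"

        # day-to-day: push the app, then record its new revision in the fork
        cd apps/my-apps/my-app-1
        git push origin master            # origin here is the Heroku repo
        cd ../../..
        git add apps/my-apps/my-app-1     # stage the submodule's new commit id
        git commit -m "Bump my-app-1"
        git push                          # update the GitHub fork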

    Read the article

  • Best way to dynamically get column names from Oracle tables

    - by MNC
    Hi, we are using an extractor application that exports data from the database to CSV
    files. Based on a condition variable it extracts data from different tables, and for some
    conditions we have to use UNION ALL, as the data has to be extracted from more than one
    table; to satisfy the UNION ALL we pad with nulls to match the number of columns. Right
    now all the queries in the system are pre-built based on the condition variable. The
    problem is that whenever there is a change in a table's projection (i.e. a new column
    added, an existing column modified, a column dropped) we have to change the application
    code manually. Can you please give some suggestions on how to extract the column names
    dynamically, so that changes in the table structure do not require code changes? My
    concern is the condition that decides which table to query. The condition logic is like:
    if the condition is A, then load from TableX; if the condition is B, then load from
    TableA and TableY. We must know which table we need to get data from. Once we know the
    table, it is straightforward to query the column names from the data dictionary. But
    there is one more wrinkle: some columns need to be excluded, and these columns differ for
    each table. I was trying to solve the problem only for dynamically generating the list of
    columns, but my manager told me to find a solution at the conceptual level rather than
    just a fix. This is a very big system, with providers and consumers constantly loading
    and consuming data, so he wants a general solution. So what is the best way of storing
    the condition, the table name and the excluded columns? One way is storing them in the
    database. Are there other ways, and if so, which is best? I have to present at least a
    couple of ideas before finalizing. Thanks,
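
    A hedged Oracle sketch of the data-dictionary half of the problem; the exclusion table
    extract_excluded_cols is hypothetical and would be part of the stored configuration.

        SELECT column_name
        FROM   all_tab_columns
        WHERE  table_name = :tab
        AND    column_name NOT IN (
                   SELECT column_name
                   FROM   extract_excluded_cols   -- hypothetical config table
                   WHERE  table_name = :tab)
        ORDER  BY column_id;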

    Read the article

  • IIS URL Rewrite rule - Default document for subdirectories

    - by Antonio Bakula
    I would like to create a URL rewrite rule that sets the default document for my virtual
    folders, e.g. something like this:

        www.domain.com/en/ -> www.domain.com/en/index.aspx
        www.domain.com/hr/ -> www.domain.com/hr/index.aspx
        www.domain.com/de/ -> www.domain.com/de/index.aspx

    The directories en, hr and de don't really exist on the web server; they are just markers
    for the language used on the site, consumed by a home-grown HTTP module that rewrites the
    path with query params. A quick solution was to define a rule for every single language,
    something like this:

        <rewrite>
            <rewriteMaps>
                <rewriteMap name="Langs">
                    <add key="/en" value="/en/index.aspx" />
                    <add key="/hr" value="/hr/index.aspx" />
                    <add key="/de" value="/de/index.aspx" />
                </rewriteMap>
            </rewriteMaps>
            <rules>
            ...

    But I would really like a solution that does not require web.config changes and a new
    rewrite rule for every language used on a particular site. Thanks!
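
    A hedged sketch of a single generic rule (untested; assumes the usual IIS URL Rewrite
    rule syntax): any two-letter top-level folder is rewritten to its index.aspx, so no
    per-language entries are needed.

        <rules>
            <rule name="LangDefaultDocument" stopProcessing="true">
                <match url="^([a-z]{2})/?$" />
                <action type="Rewrite" url="{R:1}/index.aspx" />
            </rule>
        </rules>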

    Read the article

  • Finding characters in strings in a vector

    - by SoulBeaver
    Judging from the title, I kind of did my program in a fairly complicated way. BUT! I
    might as well ask anyway. This is a simple program I wrote in response to question 3-3 of
    Accelerated C++, which is an awesome book in my opinion. I created a vector:

        vector<string> countEm;

    that accepts all valid strings; therefore, I have a vector whose elements are strings.
    Next, I created a function that converts the input to all lowercase characters for easier
    counting:

        int toLowerWords( vector<string> &vec )
        {
            for( int loop = 0; loop < vec.size(); loop++ )
                transform( vec[loop].begin(), vec[loop].end(),
                           vec[loop].begin(), ::tolower );
        }

    So far, so good. I created a third and final function to actually count the words, and
    that's where I'm stuck:

        int counter( vector<string> &vec )
        {
            for( int loop = 0; loop < vec.size(); loop++ )
                for( int secLoop = 0; secLoop < vec[loop].size(); secLoop++ )
                {
                    if( vec[loop][secLoop] == ' ' )

    That just looks ridiculous: using two subscripts to walk the characters of each string in
    the vector until I find a space. I don't believe this is an elegant or even viable
    solution. If it were viable, I would then backtrack from the space, copy all the
    characters I'd found into a separate vector, and count those. My question then is: how
    can I dissect a vector of strings into separate words so that I can actually count them?
    I thought about using strchr, but it didn't give me any epiphanies.
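
    A hedged C++ sketch of the usual idiom for this exercise: let an istringstream split each
    string on whitespace, so no per-character scanning is needed.

        #include <sstream>
        #include <string>
        #include <vector>

        int countWords(const std::vector<std::string>& vec)
        {
            int count = 0;
            for (std::vector<std::string>::size_type i = 0; i != vec.size(); ++i) {
                std::istringstream stream(vec[i]);
                std::string word;
                while (stream >> word)   // extraction skips any run of whitespace
                    ++count;
            }
            return count;
        }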

    Read the article

  • Full-text search in C++

    - by Jen
    I have a database of many (though relatively short) HTML documents. I want users to be
    able to search this database by entering one or more search words in a C++ desktop
    application. Hence, I'm looking for a fast full-text search solution. Ideally, it should:

        - skip common words, such as the, of, and, etc.
        - support stemming, i.e. a search for run also finds documents containing runner,
          running and ran
        - be able to update its index in the background as new documents are added to the
          database
        - be able to provide search word suggestions (like Google Suggest)

    To illustrate, assume the database has just two documents. Document 1: "This is a test of
    text search." Document 2: "Testing is fun." The following words should be in the index:
    fun, search, test, testing, text. If the user types t in the search box, I want the
    application to be able to suggest test, testing and text (ideally, the application should
    be able to query the search engine for the 10 most common search words starting with t).
    A search for testing should return both documents. Can you suggest a C or C++ based
    solution? (I've briefly reviewed CLucene and Xapian, but I'm not sure whether either will
    address my needs, especially querying the search word indexes for the suggest feature.)
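
    Since Xapian is one of the candidates mentioned, a hedged C++ sketch of indexing with it
    (API as documented for Xapian 1.x; not verified against any particular version):

        #include <xapian.h>

        int main()
        {
            Xapian::WritableDatabase db("idx", Xapian::DB_CREATE_OR_OPEN);
            Xapian::TermGenerator indexer;
            indexer.set_stemmer(Xapian::Stem("english"));  // run/runner/running

            Xapian::Document doc;
            indexer.set_document(doc);
            indexer.index_text("This is a test of text search.");
            db.add_document(doc);
            return 0;
        }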

    Read the article

  • Store the cache data locally

    - by Lu Lu
    Hello, I am developing a C# WinForms application; it is a client that connects to a web
    service to get data. The data returned by the web service is a DataTable, which the
    client displays in a DataGridView. My problem is that the client takes a long time to get
    all the data from the server (the web service is not local to the client), so I have to
    use a thread to fetch it. This is my model: the client creates a thread to get the data;
    the thread completes and raises an event to the client; the client displays the data in a
    DataGridView on a form. However, when the user closes the form and opens it again later,
    the client must fetch the data all over again, which makes the client slow. So I am
    thinking about cached data:

        Client <--- get/add/edit/delete ---> Cached Data <--- get/add/edit/delete ---> Server (web service)

    Please give me some suggestions. For example, should the cached data live in a separate
    application on the same host as the client, or run inside the client itself? Please
    suggest some techniques for implementing this solution, and if you have any examples,
    please share them. Thanks.
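
    A minimal C# sketch of an in-process cache, the simpler of the two options (all names are
    illustrative): the form asks the cache first and only hits the web service on a miss.

        using System;
        using System.Collections.Generic;
        using System.Data;

        static class DataCache
        {
            static readonly Dictionary<string, DataTable> tables =
                new Dictionary<string, DataTable>();

            public static DataTable Get(string key, Func<DataTable> fetch)
            {
                DataTable table;
                if (!tables.TryGetValue(key, out table))
                {
                    table = fetch();    // slow web-service call, first time only
                    tables[key] = table;
                }
                return table;
            }
        }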

    Read the article

  • Ternary operator

    - by Antoine Leclair
    In PHP, I often use the ternary operator to add an attribute to an HTML element only when
    it applies to the element in question. For example:

        <select name="blah">
            <option value="1"<?= $blah == 1 ? ' selected="selected"' : '' ?>>One</option>
            <option value="2"<?= $blah == 2 ? ' selected="selected"' : '' ?>>Two</option>
        </select>

    I'm starting a project with Pylons, using Mako for the templating. How can I achieve
    something similar? Right now, I see two possibilities, neither of which is ideal.
    Solution 1:

        <select name="blah">
        % if blah == 1:
            <option value="1" selected="selected">One</option>
        % else:
            <option value="1">One</option>
        % endif
        % if blah == 2:
            <option value="2" selected="selected">Two</option>
        % else:
            <option value="2">Two</option>
        % endif
        </select>

    Solution 2:

        <select name="blah">
            <option value="1"
            % if blah == 1:
                selected="selected"
            % endif
            >One</option>
            <option value="2"
            % if blah == 2:
                selected="selected"
            % endif
            >Two</option>
        </select>

    In this particular case, the value tested equals the option's value (value="1" and
    blah == 1), but I use the same pattern in other situations, like
    <?= isset($variable) ? ' value="$variable"' : '' ?>. I am looking for a clean way to
    achieve this using Mako.
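
    A hedged sketch of a third option: Mako's ${...} expressions are plain Python, so a
    Python conditional expression gives the same one-liner as PHP's ternary (assuming a
    Python version with the inline if/else, i.e. 2.5+).

        <select name="blah">
            <option value="1" ${'selected="selected"' if blah == 1 else ''}>One</option>
            <option value="2" ${'selected="selected"' if blah == 2 else ''}>Two</option>
        </select>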

    Read the article

  • How should I progressively enhance this content with JavaScript?

    - by Sean Dunwoody
    The background to this problem is a computing project that involves some drop-down boxes
    for input and a text input where the user can enter a date. I've used YUI to enhance the
    form: the calendar input uses the YUI calendar widget, and the drop-down list is
    converted into a horizontal list of inputs where the user only has to click once to
    select any option, as opposed to two clicks with the drop-down box (hope that makes
    sense; it's hard to explain clearly). The problem is that in the design section of my
    project I stated I would follow progressive-enhancement principles, and I am struggling
    to ensure that users without JavaScript can still use the drop-down box and text input on
    the page. This is not because I don't know how; rather, the two methods I've tried seem
    unsatisfactory. Method 1: I tried using YUI to hide the text box and drop-down list. This
    seemed like the ideal solution, but there was a noticeable lag when loading the page
    (especially the first time): the text box and drop-down list were visible for at least a
    second. I included the script just before the end of the body tag. Is there any way I can
    run it on load with YUI? Would that help? Method 2: use the noscript tag. However, I am
    loath to do this; while it would be a simple solution, I have read various bad things
    about the noscript tag. Is there a way of making method 1 work? Or is there a better way
    of doing this that I have yet to encounter?
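
    A hedged sketch of the usual fix for this flash of unenhanced content (class names are
    illustrative): set a marker class on the root element from a tiny inline script in the
    <head>, and let CSS hide the fallback controls before the page first paints.

        <script>
            // runs before the body renders, so no flash of the fallback controls
            document.documentElement.className += " js";
        </script>
        <style>
            .js .fallback-input { display: none; }  /* hidden only when JS is running */
        </style>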

    Read the article

  • How to generate a number in arbitrary range using random()={0..1} preserving uniformness and density?

    - by psihodelia
    Generate a random number in the range [x..y], where x and y are arbitrary floating-point
    numbers, using the function random(), which returns a random floating-point number in the
    range [0..1] drawn from P uniformly distributed values (call P the "density"). The
    uniform distribution must be preserved, and P must be scaled as well. I think there is no
    easy solution to this problem. To simplify it a bit, I ask you: how would you generate a
    number in the interval [-0.5 .. 0.5], then in [0 .. 2], then in [-2 .. 0], preserving
    uniformity and density? Thus, for [0 .. 2] it must draw a random number from P*2
    uniformly distributed values. The obvious simple solution, random() * (x - y) + y, will
    not generate all possible numbers, because the density is too low whenever
    abs(x-y) > 1.0; many possible values will be missed. Remember that random() returns only
    one of P possible numbers. If you then multiply such a number by Q, it gives you only one
    of P possible values scaled by Q, whereas the density P should be scaled by Q as well.
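
    A hedged Python sketch of one way to scale the density along with the range, assuming
    random() is uniform over P equally spaced values and an integer source of randomness is
    available to build the wider grid from:

        import random

        def uniform_dense(x, y, P):
            """Draw from P * |y - x| equally spaced points in [x, y)."""
            lo, hi = min(x, y), max(x, y)
            n = int(P * (hi - lo))     # grid points needed to keep density P per unit
            k = random.randrange(n)    # uniform integer in 0 .. n-1
            return lo + k / float(P)   # spacing 1/P, the same as in [0..1]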

    Read the article

  • WF -- how do I use a custom activity without creating it in a separate Workflow Activity Library?

    - by Kevin Craft
    I am trying to accomplish something that seems like it should be very simple. I have a
    State Machine Workflow Console Application with a workflow in it, and I have created a
    custom activity for it. This activity will NEVER be used ANYWHERE ELSE. I just want to
    use this activity in my workflow, but: it does not appear in the toolbox, and I cannot
    drag it from the Solution Explorer onto the workflow designer. I absolutely do not want
    to create a separate State Machine Workflow Activity Library, since that would just
    clutter my solution. Like I said, I will never use this activity in any other project, so
    I would like to keep it confined to this one... but I just can't figure out how to get it
    onto the designer! Am I going crazy!? Here is the code for the activity:

        public partial class GameSearchActivity : Activity
        {
            public GameSearchActivity()
            {
                InitializeComponent();
            }

            public static DependencyProperty QueryProperty =
                System.Workflow.ComponentModel.DependencyProperty.Register(
                    "Query", typeof(string), typeof(GameSearchActivity));

            [Description("Query")]
            [Category("Dependency Properties")]
            [Browsable(true)]
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
            public string Query
            {
                get { return ((string)(base.GetValue(GameSearchActivity.QueryProperty))); }
                set { base.SetValue(GameSearchActivity.QueryProperty, value); }
            }

            public static DependencyProperty ResultsProperty =
                System.Workflow.ComponentModel.DependencyProperty.Register(
                    "Results", typeof(IEnumerable<Game_GamePlatform>), typeof(GameSearchActivity));

            [Description("Results")]
            [Category("Dependency Properties")]
            [Browsable(true)]
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
            public IEnumerable<Game_GamePlatform> Results
            {
                get { return ((IEnumerable<Game_GamePlatform>)(base.GetValue(GameSearchActivity.ResultsProperty))); }
                set { base.SetValue(GameSearchActivity.ResultsProperty, value); }
            }

            protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
            {
                IDataService ds = executionContext.GetService<IDataService>();
                Results = ds.SearchGames(Query);
                return ActivityExecutionStatus.Closed;
            }
        }

    Thanks. EDIT: I've discovered that if I change the project type from Console Application
    to Class Library, the custom activity appears in the toolbox. However, this is not
    acceptable; it needs to be a console/Windows application. Does anyone know a way around
    this?

    Read the article

  • Parsing HTTP - Bytes.length != String.length

    - by hotzen
    Hello, I consume HTTP via nio.SocketChannel, so I get chunks of data as Array[Byte]. I
    want to feed these chunks into a parser and continue parsing after each chunk has been
    fed. HTTP itself seems to use an ISO-8859-1 charset, but the payload/body may be
    arbitrarily encoded: if the HTTP Content-Length specifies X bytes, the UTF-8-decoded body
    may have far fewer characters (one character may be represented in UTF-8 by two bytes,
    etc.). So what is a good parsing strategy to honour an explicitly specified
    Content-Length and/or a Transfer-Encoding: chunked, which specifies a chunk length to be
    honoured? Option 1: append each data chunk to a mutable.ArrayBuffer[Byte], search for
    CRLF in the bytes, decode everything from 0 until CRLF to a String, and match it with
    regular expressions like StatusRegex, HeaderRegex, etc. Option 2: decode each data chunk
    with the proper charset (e.g. ISO-8859-1, UTF-8, etc.) and append it to a StringBuilder.
    With this solution I am not able to honour any Content-Length or chunk size, but... do I
    have to care? Any other solution?
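
    A hedged Scala sketch of the byte-counting core of option 1 (names are illustrative):
    lengths are honoured in raw bytes, and decoding happens only once the body is complete.

        import scala.collection.mutable.ArrayBuffer

        val buf = ArrayBuffer[Byte]()

        // true once Content-Length bytes have arrived after the header block
        def bodyComplete(headerEnd: Int, contentLength: Int): Boolean =
          buf.length - headerEnd >= contentLength

        // decode the body only when complete, with the body's own charset
        def decodeBody(headerEnd: Int, contentLength: Int, charset: String): String =
          new String(buf.slice(headerEnd, headerEnd + contentLength).toArray, charset)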

    Read the article

  • Problem: How to display a WordPress RSS feed in a browser that doesn't have a built-in RSS reader?

    - by StephenMeehan
    If I can, I'd rather not use a service like FeedBurner. My setup: I've set up an RSS feed
    link on a self-hosted WordPress website. Clicking the RSS link in Safari shows the feed,
    because Safari has a built-in RSS reader. Great. Unfortunately, clicking the same RSS
    link in Chrome displays the raw XML feed. I know why this happens: Chrome doesn't have a
    built-in RSS reader. I assume this will also be the case in older versions of Internet
    Explorer. Possible solution? I've noticed http://www.bbc.co.uk/news has a nice solution:
    click the RSS feed link (top right of the page) in an RSS-enabled browser (Safari) and it
    uses the built-in RSS reader to display the feed; click the same link in Chrome (which
    has no built-in reader) and it displays the feed using what looks like a custom page. Is
    there a way to check whether a browser has a built-in RSS reader? How would I provide
    alternative content (like the BBC site) to a browser that doesn't have an RSS reader
    installed? Any help on this would be brilliant; thanks for taking the time to read this.
    Stephen
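
    One common approach, sketched under assumptions (the stylesheet file is hypothetical):
    attach an XSL stylesheet to the feed itself, so browsers without a reader render it as
    styled HTML while feed readers simply ignore the styling. No browser detection is needed.

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet type="text/xsl" href="/feed-style.xsl"?>
        <rss version="2.0">
            ...
        </rss>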

    Read the article

  • Interactive World Map, highlight countries on mouseover

    - by BrenGG
    I need to create an interactive world map for the front page of a site; the viewport will
    be about 650x200 pixels. The interactivity includes the following: mousing over a country
    highlights it (the country is literally filled with "red", for example) and displays the
    country's name (preferably as text in a div). I will also link the highlighting event
    with a <select> element that highlights a country when selected. I am having a difficult
    time finding a suitable solution, and I refuse to use or learn a proprietary technology
    such as Flash, so that is not an option. I created a simple mockup using OpenLayers and a
    custom map image, but the country markers load too slowly in IE6. SVG also seems too
    large: I tried RaphaelJS but abandoned it when I realised the world map data is 1.2 MB,
    which is totally unacceptable for the front page of a site. I am really at a loss as to
    how I am going to do this. My last resort is to manually create 250+ PNGs (however many
    countries there are) and apply mouseover events to hotspots in the image... but this is
    probably going to be a dead end too. Desperately seeking a solution; any helpful comments
    will be appreciated!

    Read the article

  • OpenGL ES - how to keep some object at a fixed size?

    - by OMH
    I'm working on a little game in OpenGL ES. In the background there is a world map; the
    map is just a large texture, and zoom/pinch/pan is used to move around. I'm using
    glOrthof(left, right, bottom, top, zNear, zFar) to implement the zoom/pinch. When I zoom
    in, the sprites on top of the map are also zoomed in, but I would like some sprites to
    stay at a fixed size. I could probably calculate a scale factor from the parameters to
    glOrthof, but there must be a more natural and straightforward way of doing this than
    scaling the sprites down whenever I zoom in. If I add text or GUI elements on top of the
    map, they should definitely have a fixed size. Is there a solution for this, or do I have
    to leave fixed values in glOrthof and implement zoom/pinch another way? EDIT: To be more
    clear, I want sprites that move with the map but stay the same size. I have elements that
    are like the pins in the iPhone's Maps application: when you zoom, the pins stay the same
    size but move around on the screen to stay on the same spot on the map. That is mainly
    what I want a solution for. Solutions for this already came below, thanks!
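
    A hedged GLES 1.x sketch of the scale-factor approach mentioned above (variable names are
    illustrative; left/right are assumed to be the current glOrthof extents): derive world
    units per pixel from the ortho view and apply the inverse scale per pin, so pins track
    the map but keep their pixel size.

        /* world units covered by one screen pixel at the current zoom */
        float unitsPerPixel = (right - left) / (float)viewportWidthPx;

        glPushMatrix();
        glTranslatef(pinX, pinY, 0.0f);               /* position in map coordinates */
        glScalef(unitsPerPixel, unitsPerPixel, 1.0f); /* cancel the zoom for this sprite */
        drawSprite(pinSprite);                        /* hypothetical: draws at 1 unit == 1 px */
        glPopMatrix();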

    Read the article

  • multi-shop orders table and sequential order numbers based on shop

    - by imanc
    Hey, I am looking at building a shop solution that needs to scale. Currently it processes
    1-2000 orders on average per day across multiple country-based shops (e.g. UK, US, DE,
    DK, ES, etc.), but this volume could be 10x as large in two years. I am looking at either
    using separate country-shop databases to store the orders tables, or combining them all
    into one orders table. If all orders exist in one table with a global ID (auto number)
    and a country ID (e.g. UK, DE, DK, etc.), each country's orders would also need
    sequential numbering. So, in essence, we'd have a global ID and a country order ID, with
    the country order ID being sequential per country, e.g.:

        global ID = 1000, country = UK, country order ID = 1000
        global ID = 1001, country = DE, country order ID = 1000
        global ID = 1002, country = DE, country order ID = 1001
        global ID = 1003, country = DE, country order ID = 1002
        global ID = 1004, country = UK, country order ID = 1001

    The global ID would be database-generated and not something I would need to worry about,
    but I am thinking I would have to do a query to get the current country order ID + 1 to
    find the next sequential number. Two things concern me about this: 1) query times when
    the table has potentially millions of rows and I'm doing a read before a write, and 2)
    the potential for ID clashes due to simultaneous reads/writes. With a MyISAM table, the
    entire table could be locked while the last country order + 1 is retrieved, to prevent ID
    clashes. I am wondering if anyone knows of a more elegant solution? Cheers, imanc
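
    A hedged MySQL sketch of one common alternative (table and column names are assumptions):
    keep a tiny per-country counter table and bump it atomically inside the order-insert
    transaction, so there is no read-before-write race and no full-table lock.

        CREATE TABLE country_counters (
            country        CHAR(2) NOT NULL PRIMARY KEY,
            next_order_id  INT UNSIGNED NOT NULL
        ) ENGINE=InnoDB;

        -- inside the same transaction as the order INSERT:
        UPDATE country_counters
           SET next_order_id = LAST_INSERT_ID(next_order_id + 1)
         WHERE country = 'UK';
        SELECT LAST_INSERT_ID();   -- the country order ID to store on the new row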

    Read the article
