Search Results

Search found 16379 results on 656 pages for 'long pham'.


  • How to Increase the time till a read timeout error occurs?

    - by Alex
    Hi all. I've written a PHP script that takes a long time to execute [image processing for thousands of pictures]. It's a matter of hours - maybe 5. After 15 minutes of processing, I get the error: ERROR The requested URL could not be retrieved The following error was encountered while trying to retrieve the URL: The URL which I clicked Read Timeout The system returned: [No Error] A Timeout occurred while waiting to read data from the network. The network or server may be down or congested. Please retry your request. Your cache administrator is webmaster. What I need is to enable that script to run for much longer. Here is all the technical info: I'm writing in PHP and using the Zend Framework, and I'm using Firefox. The long-running script is triggered by clicking a link, so while the script is still running I see the page the link was on and the browser shows "waiting for ...". After 15 minutes the error occurs. I tried making changes to Firefox through about:config, but without any success. The changes might be needed somewhere else. So, any ideas? Thanks ahead.

    Read the article

  • "jpeglib.h: No such file or directory" ghostscript port in OPENBSD

    - by holms
    Hello I have a problem with compiling a ghostscript from ports in openbsd 4.7. SO i have jpeg-7 installed, I have latest port tree for obsd4.7. ===> Building for ghostscript-8.63p11 mkdir -p /usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63/obj gmake LDFLAGS='-L/usr/local/lib -shared' GS_XE=./obj/../obj/libgs.so.11.0 STDIO_IMPLEMENTATION=c DISPLAY_DEV=./obj/../obj/display.dev BINDIR=./obj/../obj GLGENDIR=./obj/../obj GLOBJDIR=./obj/../obj PSGENDIR=./obj/../obj PSOBJDIR=./obj/../obj CFLAGS='-O2 -fno-reorder-blocks -fno-reorder-functions -fomit-frame-pointer -march=i386 -fPIC -Wall -Wstrict-prototypes -Wmissing-declarations -Wmissing-prototypes -fno-builtin -fno-common -DGS_DEVS_SHARED -DGS_DEVS_SHARED_DIR=\"/usr/local/lib/ghostscript/8.63\"' prefix=/usr/local ./obj/../obj/gsc gmake[1]: Entering directory `/usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63' cc -I./obj/../obj -I./src -DHAVE_MKSTEMP -O2 -fno-reorder-blocks -fno-reorder-functions -fomit-frame-pointer -march=i386 -fPIC -Wall -Wstrict-prototypes -Wmissing-declarations -Wmissing-prototypes -fno-builtin -fno-common -DGS_DEVS_SHARED -DGS_DEVS_SHARED_DIR=\"/usr/local/lib/ghostscript/8.63\" -DGX_COLOR_INDEX_TYPE='unsigned long long' -o ./obj/../obj/sdctc.o -c ./src/sdctc.c In file included from src/sdctc.c:17: obj/jpeglib_.h:1:21: jpeglib.h: No such file or directory In file included from src/sdctc.c:19: src/sdct.h:58: error: field `err' has incomplete type src/sdct.h:70: error: field `err' has incomplete type src/sdct.h:72: error: field `cinfo' has incomplete type src/sdct.h:73: error: field `destination' has incomplete type src/sdct.h:84: error: field `err' has incomplete type src/sdct.h:87: error: field `dinfo' has incomplete type src/sdct.h:88: error: field `source' has incomplete type gmake[1]: *** [obj/../obj/sdctc.o] Error 1 gmake[1]: Leaving directory `/usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63' gmake: *** [so] Error 2 *** Error code 2 Stop in /usr/ports/print/ghostscript/gnu (line 2225 of /usr/ports/infrastructure/mk/bsd.port.mk). I tried to place one more param in CFLAGS in Makefile with value "-I/usr/local" but no luck =( People in irc [freenode server, #openbsd channel] refuses give any help for ports at all, and even more - because this is 4.7 unstable version. I have my reasons to use this version and ports believe me =) CFLAGS+= -DSYS_TYPES_HAS_STDINT_TYPES \ -I${LOCALBASE}/include \ -I${LOCALBASE}/include/ijs \ -I${LOCALBASE}/include/libpng \

    Read the article

  • which xml validator will work perfectly for multithreading project

    - by Sunil Kumar Sahoo
    Hi all, I have used JDOM for XML validation against a schema. The main problem is that it gives the error FWK005 parse may not be called while parsing. The reason is that multiple threads were running Xerces validation at the same time, so the workaround I found was to lock around the validation, which is not good. So I want to know which XML validator works reliably in a multithreaded project. public static HashMap validate(String xmlString, Validator validator) { HashMap<String, String> map = new HashMap<String, String>(); long t1 = System.currentTimeMillis(); DocumentBuilder builder = null; try { //obtain lock to proceed // lock.lock(); try { builder = DocumentBuilderFactory.newInstance().newDocumentBuilder(); // Source source = new DOMSource(builder.parse(new ByteArrayInputStream(xmlString.getBytes()))); validator.validate(new StreamSource(new StringReader(xmlString))); map.put("ISVALID", "TRUE"); logger.info("We have successfuly validated the schema"); } catch (Exception ioe) { ioe.printStackTrace(); logger.error("NOT2 VALID STRING IS :" + xmlString); map.put("MSG", ioe.getMessage()); // logger.error("IOException while validating the input XML", ioe); } logger.info(map); long t2 = System.currentTimeMillis(); logger.info("XML VALIDATION TOOK:::" + (t2 - t1)); } catch (Exception e) { logger.error(e); } finally { //release lock // lock.unlock(); builder = null; } return map; } Thanks, Sunil Kumar Sahoo
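
    A note on the usual fix, rather than a different validator: in JAXP the javax.xml.validation.Schema object is thread-safe, while the Validator instances it creates are not, and sharing one Validator across threads is what produces FWK005. A minimal sketch of the share-the-Schema, one-Validator-per-thread pattern (the schema path and class name are made up for illustration):

        import java.io.StringReader;
        import javax.xml.XMLConstants;
        import javax.xml.transform.stream.StreamSource;
        import javax.xml.validation.Schema;
        import javax.xml.validation.SchemaFactory;
        import javax.xml.validation.Validator;

        public class ThreadSafeValidation {
            // A compiled Schema is immutable and safe to share between threads.
            private static final Schema SCHEMA = loadSchema("orders.xsd");

            // A Validator is NOT thread-safe, so each thread gets its own.
            private static final ThreadLocal<Validator> VALIDATOR =
                    ThreadLocal.withInitial(SCHEMA::newValidator);

            private static Schema loadSchema(String path) {
                try {
                    return SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                                        .newSchema(new StreamSource(path));
                } catch (org.xml.sax.SAXException e) {
                    throw new IllegalStateException("Could not load schema " + path, e);
                }
            }

            public static boolean isValid(String xml) {
                try {
                    VALIDATOR.get().validate(new StreamSource(new StringReader(xml)));
                    return true;
                } catch (Exception e) {
                    return false;   // SAXException: invalid document; IOException: read failure
                }
            }
        }

    With the Validator confined to one thread, the explicit lock in the validate() method above can go away entirely.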

    Read the article

  • Extracting bool from istream in a templated function

    - by Thomas Matthews
    I'm converting my field classes' read functions into one template function. I have field classes for int, unsigned int, long, and unsigned long. These all use the same method for extracting a value from an istringstream (only the types change): template <typename Value_Type> Value_Type Extract_Value(const std::string& input_string) { std::istringstream m_string_stream; m_string_stream.str(input_string); m_string_stream.clear(); Value_Type value; m_string_stream >> value; return value; } The tricky part is with the bool (Boolean) type. There are many textual representations for Boolean: 0, 1, T, F, TRUE, FALSE, and all the case-insensitive combinations. Here are the questions: What does the C++ standard say are valid data to extract a bool, using the stream extraction operator? Since a Boolean can be represented by text, does this involve locales? Is this platform dependent? I would like to simplify my code by not writing my own handler for bool input. I am using MS Visual Studio 2008 (version 9), C++, and Windows XP and Vista.

    Read the article

  • jQuery Autocomplete plugin (Jorn Zaefferer's) - how to dynamically change the list of displayed values

    - by Max Williams
    I'm using Jorn Zaefferer's Autocomplete query plugin, http://bassistance.de/jquery-plugins/jquery-plugin-autocomplete/ I have options set so it shows all the values when you click in the empty text field, a bit like a select, and the option is also set so that the user can only choose from the list of values used by the autocomplete (so it's kind of like a select but with autocomplete functionality). I have two radio buttons below the text field, which determine whether the user chooses from a long list or a short list of possible values. I want to update the values used in the autocomplete when one of these radio buttons is clicked. Currently i'm doing this in a not very clever way by calling autocomplete again on the same text field, with the different array of values, but this creates a situation where both are active at once, and i can see the long list peeking out from behind the short list. What i need to do is either a) dynamically change the values used in the autocomplete or b) remove (unbind?) the autocomplete from the text field before re-initialising it. Either of these would do tbh though option a) is kind of nicer. Any ideas anyone? Here's my current code: function initSubjectLongShortList(field, short_values, long_values){ $(".subject_short_long_list").change(function(){ updateSubjectAutocomplete(field, short_values, long_values); }); updateSubjectAutocomplete(field, short_values, long_values); } function updateSubjectAutocomplete(field, short_values, long_values){ if($(".subject_short_long_list:checked").attr('id') == "subject_long_list"){ initSubjectAutocomplete(field, long_values); } else { initSubjectAutocomplete(field, short_values); } } function initSubjectAutocomplete(field, values){ jQuery(field).autocomplete(values, { minChars: 0, //make it appear as soon as we click in the field max: 2000, scrollHeight: 400, matchContains: true, selectFirst: false }); } cheers, max

    Read the article

  • Asynchronously populate datagridview in Windows Forms application

    - by dcryptd
    Howzit! I'm a web developer that has recently been requested to develop a Windows Forms application, so please bear with me (or don't laugh!) if my question is a bit elementary. After many sessions with my client, we eventually decided on an interface that contains a tabcontrol with 5 tabs. Each tab has a datagridview that may eventually hold up to 25,000 rows of data (with about 6 columns each). I have successfully managed to bind the grids when the tab page is loaded and it works fine for a few records, but the UI freezes when I bind the grid with 20,000 dummy records. The "freeze" occurs when I click on the tab itself, and the UI only frees up (and the tab page is rendered) once the bind is complete. I communicated this to the client and mentioned the option of paging for each grid, but she is adamant w.r.t. NOT wanting this. My only option then is to look for some asynchronous method of doing this in the background. I don't know much about threading in Windows Forms, but I know that I can use the BackgroundWorker control to achieve this. My only issue after reading up a bit on it is that it is ideally used for "long-running" tasks and I/O operations. My questions: How does one determine a long-running task? How does one NOT MISUSE the BackgroundWorker control, i.e. is there a general guideline to follow when using this? (I understand that opening/spawning multiple threads may be undesirable in certain instances.) Most importantly: How can I achieve asynchronous binding of the datagridview after the tab page - and all its child controls - loads? Thank you for reading this (ahem) lengthy query, and I highly appreciate any responses/thoughts/directions on this matter! Cheers!

    Read the article

  • Dynamic controls not being created with RSS reader function

    - by TuxMeister
    Hello, I am working on a test project for an RSS reader. I am using Chilkat's module for .NET 3.5. What I am trying to do is this: For each item in the RSS feed, I want to dynamically create some controls (labels) that contain stuff like the title, the link and the publication date. The problem is that only the first control comes up "rssTitle", but not the rest and it's definitely not creating the rest, nor cycling through the RSS items. Any ideas where I'm wrong in my code? Imports Chilkat Public Class Form1 Dim rss As New Chilkat.Rss Dim success As Boolean Dim rssTitle As New Label Dim rssLink As New Label Dim rssPubDate As New Label Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click success = rss.DownloadRss("http://www.engadget.com/rss.xml") If success <> True Then MessageBox.Show(rss.LastErrorText) Exit Sub End If Dim rssChannel As Chilkat.Rss rssChannel = rss.GetChannel(0) If rssChannel Is Nothing Then MessageBox.Show("No channel found.") Exit Sub End If Dim numItems As Long numItems = rssChannel.NumItems Dim i As Long For i = 0 To numItems - 1 Dim rssItem As Chilkat.Rss rssItem = rssChannel.GetItem(i) Me.Controls.Add(rssTitle) With rssTitle .Name = "rssTitle" & Me.Controls.Count.ToString + 1 .Text = "Title: " & rssItem.GetString("title") .Left = 12 .Top = 12 End With Me.Controls.Add(rssLink) With rssLink .Name = "rssLink" & Me.Controls.Count.ToString + 1 .Text = "Link: " & rssItem.GetString("link") .Left = 12 .Top = 12 End With Me.Controls.Add(rssPubDate) With rssPubDate .Name = "rssPubDate" & Me.Controls.Count.ToString + 1 .Text = "Pub date: " & rssItem.GetString("pubDate") .Left = 12 .Top = 12 End With Next End Sub End Class I'm grateful for any help. Thanks!

    Read the article

  • How do you Access an Authenticated Google App Engine Service with Ruby?

    - by viatropos
    I am trying to do this same thing here but with Ruby: Access Authenticated GAE Client with Python. Any ideas how to retrieve authenticated content from GAE with Ruby? I am using the Ruby GData Gem to access everything in Google Docs and such and it's making life very easy, but now I'd like to access things on GAE that require admin access, programmatically, and it doesn't support that. Here's what I'm getting (using DocList, not sure what to use yet): c = GData::Client::DocList.new c.clientlogin(username, password, nil, nil, nil, "HOSTED") c => #<GData::Client::DocList:0x201bad8 @clientlogin_service="writely", @version="2", @auth_handler=#<GData::Auth::ClientLogin:0x200803c @account_type="HOSTED", @token="long-hash", @auth_url="https://www.google.com/accounts/ClientLogin", @service="writely">, @source="AnonymousApp", @headers={"Authorization"=>"GoogleLogin auth=long-hash", "User-Agent"=>"GoogleDataRubyUtil-AnonymousApp", "GData-Version"=>"2", "Content-Type"=>"application/atom+xml"}, @authsub_scope="http://docs.google.com/feeds/", @http_service=GData::HTTP::DefaultService> url = "http://my-cdn.appspot.com/files/restricted-file.html" c.get(url) => #<GData::HTTP::Response:0x20004b8 @status_code=302, @body="", @headers={"connection"=>"close", "date"=>"Sun, 11 Apr 2010 00:30:20 GMT", "content-type"=>"text/html", "server"=>"Google Frontend", "content-length"=>"0", "location"=>"https://www.google.com/accounts/ServiceLogin service=ah&continue=http://my-cdn.appspot.com/_ah/login%3Fcontinue%3D http://my-cdn.appspot.com/files/restricted-file.html& ltmpl=gm&ahname=My+CDN&sig=a-signature"}> Any tips? That other SO question pointed to doing something with the redirect... Not sure how to handle that. Just looking for a point in the right direction from the ruby experts. Thanks.

    Read the article

  • Refactoring Bloated ViewModel

    - by Holy Christ
    Hi, I am writing a PRISM/MVVM/WPF application. It's a LOB application, so there are a lot of complicated rules. I've noticed the View Model is starting to get bloated. There are two main issues. One is that to maintain MVVM, I'm doing a lot of things that feel hacky like adding a bunch of properties to my VM. The view binds to those properties to keep track of what feels like view specific information. For example, a boolean keeping track of the status of a long running process in the VM, so the view can disable some of its controls while the long running process is working. I've read that this issue could be solved with Attached Behaviors. I'll look more into that. In the example MVVM apps you see online, this isn't a big deal because they are over-simplified. The other issue is the number of commands in my VM. Right now there are four commands. I'm defining the commands in the VM using Josh Smith's RelayCommand (basically the DelegateCommand in PRISM) so all the business logic lives in the VM. I considered moving each command into separate unit of works. I'm not sure the best way to do this. Which patterns are you guys using to keep your VMs clean? I can already feel someone responding with "your view and VM is too complicated, you should break them into many view/VMs". It is certainly not too complicated from a Ux perspective - there are 2 buttons, a combobox, and a listbox. Also, from a logical perspective, it is one cohesive domain. Having said that, I'm very interested in hearing how others are dealing with this type of issue. Thanks for your input.

    Read the article

  • Reading from serial port with Boost Asio?

    - by trikri
    Hi! I'm going to check for incoming messages (data packets) on the serial port, using Boost Asio. Each message will start with a header that is one byte long and specifies which type of message has been sent. Each type of message has its own length. The function I'm about to write should check for new incoming messages continually, and when it finds one it should read it, and then some other function should parse it. I thought that the code might look something like this: void check_for_incoming_messages() { boost::asio::streambuf response; boost::system::error_code error; std::string s1, s2; if (boost::asio::read(port, response, boost::asio::transfer_at_least(0), error)) { s1 = streambuf_to_string(response); int msg_code = s1[0]; if (msg_code < 0 || msg_code >= NUM_MESSAGES) { // Handle error, invalid message header } if (boost::asio::read(port, response, boost::asio::transfer_at_least(message_lengths[msg_code]-s1.length()), error)) { s2 = streambuf_to_string(response); // Handle the content of s1 and s2 } else if (error != boost::asio::error::eof) { throw boost::system::system_error(error); } } else if (error != boost::asio::error::eof) { throw boost::system::system_error(error); } } Is boost::asio::streambuf the right thing to use? And how do I extract the data from it so I can parse the message? I also want to know if I need a separate thread which only calls this function, so that it gets called more often - isn't there otherwise a risk of losing data between two calls to the function, because so much data comes in that it can't be stored in the serial port's memory? I'm using Qt as a widget toolkit and I don't really know how long it needs to process all its events.

    Read the article

  • Multimedia files written over WAN are getting truncated

    - by Dean
    I use the Windows Multimedia API to create .wav files. 1. Open the file with mmioOpen 2. Create the WAVE, fmt and data chunks using mmioCreateChunk 3. Write audio data using mmioWrite 4. Ascend out of the chunks using mmioAscend 5. Close the file using mmioClose The file is written into a temporary location, so after it has been closed it gets copied to another location using CopyFile. This program is written in C++ and works great until the file it is writing resides over a WAN in a different city or country. The end result is that a wav file that should be 20-30 seconds long ends up being 4 seconds long. It is always the last bit that is missing, so when you play it back it just stops before the end of the recording. I initially thought that maybe I was copying the file too soon, so as a test I put in a pause of 30 seconds after closing the file using Sleep(30000), but this made no difference to either it being truncated or by how much. I have modified the program to write to a file in parallel using CreateFile and WriteFile, and the result is the same, so it is not an issue specifically with the mmio APIs. Does anyone have any ideas why this is happening and if there is a work-around for it? I suspect that I may end up having the temporary location on the local drive, but this is quite a big change to the application as well as to existing deployments. Thanks for everyone's time, Dean

    Read the article

  • How to manage a One-To-One and a One-To-Many of same type as unidirectional mapping?

    - by user1652438
    I'm trying to implement a model for private messages between two or more users. That means I've got two entities: User and PrivateMessage. The User model shouldn't be edited, so I'm trying to set up a unidirectional relationship: @Entity (name = "User") @Table (name = "user") public class User implements Serializable { @Id String username; String password; // something like this ... } The PrivateMessage model addresses multiple receivers and has exactly one sender. So I need something like this: @Entity (name = "PrivateMessage") @Table (name = "privateMessage") @XmlRootElement @XmlType (propOrder = {"id", "sender", "receivers", "title", "text", "date", "read"}) public class PrivateMessage implements Serializable { private static final long serialVersionUID = -9126766942868177246L; @Id @GeneratedValue private Long id; @NotNull private String title; @NotNull private String text; @NotNull @Temporal(TemporalType.TIMESTAMP) private Date date; @NotNull private boolean read; @NotNull @ElementCollection(fetch = FetchType.EAGER, targetClass = User.class) private Set<User> receivers; @NotNull @OneToOne private User sender; // and so on } The expected 'privateMessage' table isn't generated the way I want, and only the relationship between the message and its many receivers is satisfied. I'm confused about this. Every time I try to set a 'mappedBy' attribute, my IDE marks it as an error. It seems to be a problem that the User entity isn't aware of the private message that maps it. What am I doing wrong here? I've solved situations similar to this one before, but none of those solutions work here. Thanks in advance!
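
    For what it's worth, mappedBy can only point at an owning side on the other entity, which is exactly what a unidirectional mapping does not have - hence the IDE error. One hedged sketch that keeps User untouched: model sender as @ManyToOne (one sender per message, but a user can send many messages) and receivers as a unidirectional @ManyToMany backed by a join table (the column and table names below are illustrative, not prescribed):

        import java.io.Serializable;
        import java.util.Date;
        import java.util.HashSet;
        import java.util.Set;
        import javax.persistence.*;

        @Entity(name = "PrivateMessage")
        @Table(name = "privateMessage")
        public class PrivateMessage implements Serializable {

            @Id
            @GeneratedValue
            private Long id;

            // Exactly one sender per message; a user may send many messages.
            @ManyToOne(optional = false)
            @JoinColumn(name = "sender_username")
            private User sender;

            // Many receivers per message, many messages per receiver; the join
            // table keeps the mapping out of the User entity entirely.
            @ManyToMany(fetch = FetchType.EAGER)
            @JoinTable(name = "privateMessage_receivers",
                       joinColumns = @JoinColumn(name = "message_id"),
                       inverseJoinColumns = @JoinColumn(name = "receiver_username"))
            private Set<User> receivers = new HashSet<User>();

            private String title;
            private String text;

            @Temporal(TemporalType.TIMESTAMP)
            private Date date;

            private boolean read;

            // getters and setters omitted
        }

    @ElementCollection is meant for collections of basic or embeddable values, so pointing it at an entity type is what was breaking the receivers side.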

    Read the article

  • how can I add a custom non-DataTable column to my DataView, in a winforms ADO.net application?

    - by Greg
    Hi, How could I (is it even possible to) add a custom column to my DataView, and display the result of a specific calculation in that column? That is, I currently have a dataGridView which is bound to a DataView, based on the DataTable from my database. I'd like to add an additional column to the dataGridView to display a number which is calculated by looking at the current row plus its child rows. In other words, the info for the column isn't derivable from the row data alone. Specific questions might be: a) where do I add the column itself - to the DataView, I assume? b) which method/event should trigger the re-calculation of this custom column's value (how do I control this)? Thanks. PS. I've also noted that if I use the following code/approach I get an infinite loop... // Custom Items DataColumn dc = new DataColumn("OverallSize", typeof(long)); DT_Webfiles.Columns.Add(dc); DT_Webfiles.RowChanged += new DataRowChangeEventHandler(DT_Row_Changed); private static void DT_Row_Changed(object sender, DataRowChangeEventArgs e) { e.Row["OverallSize"] = e.Row["OverallSize"] ?? 0; e.Row["OverallSize"] = (long)e.Row["OverallSize"] + 1; } What other approach could avoid this looping? I.e., currently I'm updating the value of the custom column when the row changes, but changing the row then triggers another 'row has changed' event...

    Read the article

  • Insert video clip in a lyx presentation and play it in GNU/Linux.

    - by Orjanp
    How can I insert a video clip into a presentation created in Lyx? Have seen http://www.latex-community.org/forum/viewtopic.php?f=19&t=48. It works, but there the video starts in the background in an external player. I would prefer it to be played in the presentation itself. If an external player is used it it should at least start in the foreground. But the presentation takes the foreground. Using evince in GNU/linux as pdf viewer. Beamer is used as a presentation template. Is it possible to play a video file in an embedded player in the presentation itself? Created an example presentation. The code is found below. \documentclass[english]{beamer} \usepackage{mathptmx} \usepackage[T1]{fontenc} \usepackage[latin9]{inputenc} \usepackage{amsmath} \usepackage{amssymb} \makeatletter %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Textclass specific LaTeX commands. % this default might be overridden by plain title style \newcommand\makebeamertitle{\frame{\maketitle}}% \AtBeginDocument{ \let\origtableofcontents=\tableofcontents \def\tableofcontents{\@ifnextchar[{\origtableofcontents}{\gobbletableofcontents}} \def\gobbletableofcontents#1{\origtableofcontents} } \makeatletter \long\def\lyxframe#1{\@lyxframe#1\@lyxframestop}% \def\@lyxframe{\@ifnextchar<{\@@lyxframe}{\@@lyxframe<*>}}% \def\@@lyxframe<#1>{\@ifnextchar[{\@@@lyxframe<#1>}{\@@@lyxframe<#1>[]}} \def\@@@lyxframe<#1>[{\@ifnextchar<{\@@@@@lyxframe<#1>[}{\@@@@lyxframe<#1>[<*>][}} \def\@@@@@lyxframe<#1>[#2]{\@ifnextchar[{\@@@@lyxframe<#1>[#2]}{\@@@@lyxframe<#1>[#2][]}} \long\def\@@@@lyxframe<#1>[#2][#3]#4\@lyxframestop#5\lyxframeend{% \frame<#1>[#2][#3]{\frametitle{#4}#5}} \makeatother \def\lyxframeend{} % In case there is a superfluous frame end %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% User specified LaTeX commands. \usetheme{Warsaw} \usepackage{hyperref} \makeatother \usepackage{babel} \begin{document} \title{Testing video} \makebeamertitle \lyxframeend{}\section{Testing video} \lyxframeend{}\subsection{Testing video} \lyxframeend{}\lyxframe{Testing video} \href{run:video.wmv}{Movie} \appendix \lyxframeend{} \end{document}

    Read the article

  • php refresh on 2nd page refresh

    - by cnotethegr8
    I have this function that outputs a number (the number is the total amount of downloads of my iPhone themes). Because the code has to make so many requests, it loads the page very slowly. What would be the best way to load the result into a variable and then reuse it on the second page refresh, so it doesn't take so long to load? Or any other method will do - I just want it to not take so long to load! Also, this isn't on my server, so I can't use $.ajax. <?php function all_downloads() { $allThemes = array( 'com.modmyi.batterytheme', 'com.modmyi.connectiontheme', 'com.modmyi.icontheme', 'com.modmyi.percenttheme', 'com.modmyi.statusnotifiertheme', 'com.modmyi.cnote', 'com.modmyi.iaccescnotekb', 'com.modmyi.cnotelite', 'com.modmyi.multibrowsericon', 'com.modmyi.changeappstoreiconwithinstallous' ); $total = 0; foreach($allThemes as $com_modmyi){ $theme = file_get_contents( "http://modmyi.com/cstats/index.php?package=".$com_modmyi.'&output=number'); $theme = str_replace(",","", $theme); $almost_done += $theme; $rock_your_phone = 301; //From c-note and Multi Lock Screen Theme on Rock Your Phone $total = ($almost_done + $rock_your_phone); } echo number_format($total); } ?>

    Read the article

  • Force full garbage collection when memory occupation goes beyond a certain threshold

    - by Silvio Donnini
    I have a server application that, in rare occasions, can allocate large chunks of memory. It's not a memory leak, as these chunks can be claimed back by the garbage collector by executing a full garbage collection. Normal garbage collection frees amounts of memory that are too small: it is not adequate in this context. The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the allotted maximum specified with -Xmx. That would be ok, if it wasn't for the fact that these problematic memory allocations come in bursts, and can cause OutOfMemoryErrors due to the fact that the jvm is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation. Anyway, I'd prefer not having to monitor my jvm's memory allocation myself (or insert memory management into my application's logic); it would be nice if there was a way to run the virtual machine with a memory threshold, over which full GCs would be executed automatically, in order to release very early the memory I'm going to need. Long story short: I need a way (a command line option?) to configure the jvm in order to release early a good amount of memory (i.e. perform a full GC) when memory occupation reaches a certain threshold, I don't care if this slows my application down every once in a while. All I've found till now are ways to modify the size of the generations, but that's not what I need (at least not directly). I'd appreciate your suggestions, Silvio P.S. I'm working on a way to avoid large allocations, but it could require a long time and meanwhile my app needs a little stability
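
    There isn't a stock HotSpot flag that forces a full stop-the-world collection at a given occupancy, but on the JVMs of that era the closest built-in knob is the CMS collector's initiating-occupancy threshold, which starts a (mostly concurrent) old-generation collection once occupancy crosses a chosen percentage instead of waiting until the heap is nearly exhausted. A hedged example command line - the heap size, the 60% threshold and the jar name are placeholders:

        java -Xmx2g \
             -XX:+UseConcMarkSweepGC \
             -XX:CMSInitiatingOccupancyFraction=60 \
             -XX:+UseCMSInitiatingOccupancyOnly \
             -jar server.jar

    Without -XX:+UseCMSInitiatingOccupancyOnly the JVM uses the fraction only for the first cycle and then falls back to its own heuristics.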

    Read the article

  • Add Hexadecimal Header Info to JPEG File Using Java

    - by jboyd
    I need to add header info to a JPEG file in order to get it to work properly when shared on some websites, I've tracked down the correct info through a lot of Hex digging, but now I'm kind of stuck trying to get it into the file. I know where in the file it needs to go, and I know how long it is, my problem is that RandomAccessFile just overwrites existing data in the file and FileOutputStream appends the data to the end. I don't want either, I want to INSERT data starting at the third byte. My example code: File fileToChange = new File("someimage.jpg"); byte[] i = new byte[2]; i[0] = (byte)Integer.decode("0xcc"); i[1] = (byte)Integer.decode("0xcc"); RandomAccessFile f = new RandomAccessFile(new File("videothing.jpg"), "rw"); long aPositionWhereIWantToGo = 2; f.seek(aPositionWhereIWantToGo); // this basically reads n bytes in the file f.write((byte[])i); f.close(); So this doesn't work because it overwrites, and does not insert, I can't find any way to just insert data into a file
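
    Since neither RandomAccessFile nor FileOutputStream can insert bytes, the usual workaround is to buffer everything from the insert point to the end of the file, write the new bytes, then write the buffered tail back out. A sketch under that assumption (the helper name is mine, and it assumes the tail fits in memory):

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class ByteInserter {
            // Insert 'extra' into the file at 'offset' by rewriting the tail.
            static void insert(RandomAccessFile f, long offset, byte[] extra) throws IOException {
                byte[] tail = new byte[(int) (f.length() - offset)];
                f.seek(offset);
                f.readFully(tail);   // remember everything after the insert point
                f.seek(offset);
                f.write(extra);      // the new header bytes
                f.write(tail);       // the old data, shifted by extra.length
            }
        }

    Note that the result is only a valid JPEG if the inserted bytes form a proper marker segment (for example an APPn segment with a correct length field) at that position.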

    Read the article

  • Counting the number of objects that meet a certain criterion

    - by Candy Chiu
    The title doesn't tell the complete story. Please read the message. I have two objects: Adult and Child. Child has a boolean field isMale, and a reference to Adult. Adult doesn't reference Child. public class Adult { long id; } public class Child { long id; boolean isMale; Adult parent; } I want to create a query to list the number of sons each adult has including adults who don't have any sons. I tried: Query 1 SELECT adult, COUNT(child) FROM Child child RIGHT OUTER JOIN child.parent as adult WHERE child.isMale='true' GROUP BY adult which translates to sql select adult.id as col_0_0_, count(child.id) as col_1_0_, ... {omit properties} from Child child right outer join Adult adult on child.parentId=adult.id where child.isMale = 'true' group by adult.id Query 1 doesn't pick up adults that don't have any sons. Query 2: SELECT adult, COUNT(child.isMale) FROM Child child RIGHT OUTER JOIN child.parent as adult GROUP BY adult translates to sql: select adult.id as col_0_0_, count(child.id) as col_1_0_, ... {omit properties} from Child child right outer join Adult adult on child.parentId=adult.id group by adult.id Query 2 doesn't have the right count of sons. Basically COUNT doesn't evaluate isMale. The where clause in Query 1 filtered out Adults with no sons. How do I build a HQL or a Criteria query for this use case? Thanks.
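
    Since Adult has no mapped collection of Child to left-join through, one HQL shape that keeps the zero counts is a correlated subquery in the select clause; adults with no sons then come back with a count of 0. A sketch against the entities above, assuming a plain Hibernate Session (the boolean is bound as a parameter to sidestep dialect-specific literal handling):

        import java.util.List;
        import org.hibernate.Session;

        public class SonCounts {
            // Each row is an Object[]{Adult, Long sonCount}, including counts of 0.
            @SuppressWarnings("unchecked")
            static List<Object[]> sonCountPerAdult(Session session) {
                return session.createQuery(
                        "select adult, "
                      + "  (select count(child) from Child child "
                      + "    where child.parent = adult and child.isMale = :male) "
                      + "from Adult adult")
                    .setParameter("male", Boolean.TRUE)
                    .list();
            }
        }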

    Read the article

  • How to delete a large cookie that causes Apache to 400

    - by jakemcgraw
    I've come across an issue where a web application has managed to create a cookie on the client, which, when submitted by the client to Apache, causes Apache to return the following: HTTP/1.1 400 Bad Request Date: Mon, 08 Mar 2010 21:21:21 GMT Server: Apache/2.2.3 (Red Hat) Content-Length: 7274 Connection: close Content-Type: text/html; charset=iso-8859-1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>400 Bad Request</title> </head><body> <h1>Bad Request</h1> <p>Your browser sent a request that this server could not understand.<br /> Size of a request header field exceeds server limit.<br /> <pre> Cookie: ::: A REALLY LONG COOKIE ::: </pre> </p> <hr> <address>Apache/2.2.3 (Red Hat) Server at www.foobar.com Port 80</address> </body></html> After looking into the issue, it would appear that the web application has managed to create a really long cookie, over 7000 characters. Now, don't ask me how the web application was able to do this, I was under the impression browsers were supposed to prevent this from happening. I've managed to come up with a solution to prevent the cookies from growing out of control again. The issue I'm trying to tackle is how do I reset the large cookie on the client if every time the client tries to submit a request to Apache, Apache returns a 400 client error? I've tried using the ErrorDocument directive, but it appears that Apache bails on the request before reaching any custom error handling.
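
    One workaround to consider - an assumption, not something verified against this exact setup: Apache rejects the request because the Cookie header exceeds its configured LimitRequestFieldSize (8190 bytes by default), and it does so before ErrorDocument or the application ever run. If your Apache build allows raising that limit, accepting the oversized header temporarily lets the request reach the application, which can then expire the cookie with a Set-Cookie response; afterwards the limit can go back to the default. Example directive for the server config or virtual host:

        # Hypothetical value; raise only long enough for clients to shed the bad cookie.
        LimitRequestFieldSize 16380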

    Read the article

  • Is delete p, where p is a pointer to an array, a memory leak?

    - by Eli
    Following a discussion in a software meeting, I set out to find out whether deleting a dynamically allocated primitive array with plain delete will cause a memory leak. I wrote this tiny program and compiled it with Visual Studio 2008 running on Windows XP: #include "stdafx.h" #include "Windows.h" const unsigned long BLOCK_SIZE = 1024*100000; int _tmain() { for (unsigned int i =0; i < 1024*1000; i++) { int* p = new int[1024*100000]; for (int j =0;j<BLOCK_SIZE;j++) p[j]= j % 2; Sleep(1000); delete p; } } I then monitored the memory consumption of my application using Task Manager. Surprisingly, the memory was allocated and freed correctly; allocated memory did not steadily increase as expected. I modified my test program to allocate a non-primitive type array: #include "stdafx.h" #include "Windows.h" struct aStruct { aStruct() : i(1), j(0) {} int i; char j; } NonePrimitive; const unsigned long BLOCK_SIZE = 1024*100000; int _tmain() { for (unsigned int i =0; i < 1024*100000; i++) { aStruct* p = new aStruct[1024*100000]; Sleep(1000); delete p; } } After running for 10 minutes there was no meaningful increase in memory. I compiled the project with warning level 4 and got no warnings. Is it possible that the Visual Studio runtime keeps track of the allocated objects' types, so there is no difference between delete and delete[] in that environment?

    Read the article

  • Lua Alien Module - Trouble using WriteProcessMemory function, unsure on types (uint32)

    - by jefferysanders
    require "alien" --the address im trying to edit in the Mahjong game on Win7 local SCOREREF = 0x0744D554 --this should give me full access to the process local ACCESS = 0x001F0FFF --this is my process ID for my open window of Mahjong local PID = 1136 --function to open proc local op = alien.Kernel32.OpenProcess op:types{ ret = "pointer", abi = "stdcall"; "int", "int", "int"} --function to write to proc mem local wm = alien.Kernel32.WriteProcessMemory wm:types{ ret = "long", abi = "stdcall"; "pointer", "pointer", "pointer", "long", "pointer" } local pRef = op(ACCESS, true, PID) local buf = alien.buffer("99") -- ptr,uint32,byte arr (no idea what to make this),int, ptr print( wm( pRef, SCOREREF, buf, 4, nil)) --prints 1 if success, 0 if failed So that is my code. I am not even sure if I have the types set correctly. I am completely lost and need some guidance. I really wish there was more online help/documentation for alien, it confuses my poor brain. What utterly baffles me is that it WriteProcessMemory will sometimes complete successfully (though it does nothing at all, to my knowledge) and will also sometimes fail to complete successfully. As I've stated, my brain hurts. Any help appreciated.

    Read the article

  • Google App Engine: Unit testing concurrent access to memcache

    - by Phuong Nguyen de ManCity fan
    Would you guys show me a way to simulating concurrent access to memcache on Google App Engine? I'm trying with LocalServiceTestHelpers and threads but don't have any luck. Every time I try to access Memcache within a thread, then I get this error: ApiProxy$CallNotFoundException: The API package 'memcache' or call 'Increment()' was not found I guess that the testing library of GAE SDK tried to mimic the real environment and thus setup the environment for only one thread (the thread that running the test) which cannot be seen by other thread. Here is a piece of code that can reproduce the problem package org.seamoo.cache.memcacheImpl; import org.testng.Assert; import org.testng.annotations.AfterMethod; import org.testng.annotations.BeforeMethod; import org.testng.annotations.Test; import com.google.appengine.api.memcache.MemcacheService; import com.google.appengine.api.memcache.MemcacheServiceFactory; import com.google.appengine.tools.development.testing.LocalMemcacheServiceTestConfig; import com.google.appengine.tools.development.testing.LocalServiceTestHelper; public class MemcacheTest { LocalServiceTestHelper helper; public MemcacheTest() { LocalMemcacheServiceTestConfig memcacheConfig = new LocalMemcacheServiceTestConfig(); helper = new LocalServiceTestHelper(memcacheConfig); } /** * */ @BeforeMethod public void setUp() { helper.setUp(); } /** * @see LocalServiceTest#tearDown() */ @AfterMethod public void tearDown() { helper.tearDown(); } @Test public void memcacheConcurrentAccess() throws InterruptedException { final MemcacheService service = MemcacheServiceFactory.getMemcacheService(); Runnable runner = new Runnable() { @Override public void run() { // TODO Auto-generated method stub service.increment("test-key", 1L, 1L); try { Thread.sleep(200L); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } service.increment("test-key", 1L, 1L); } }; Thread t1 = new Thread(runner); Thread t2 = new Thread(runner); t1.start(); t2.start(); while (t1.isAlive()) { Thread.sleep(100L); } Assert.assertEquals((Long) (service.get("test-key")), new Long(4L)); } }
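
    A likely explanation (stated as an assumption): LocalServiceTestHelper.setUp() installs the stub API environment only on the thread that calls it, so the worker threads have no ApiProxy environment and every memcache call fails with CallNotFoundException. The usual pattern is to capture the environment on the test thread and install it in each worker before touching the API - a sketch:

        import com.google.appengine.api.memcache.MemcacheService;
        import com.google.appengine.api.memcache.MemcacheServiceFactory;
        import com.google.apphosting.api.ApiProxy;

        public class MemcacheWorker {
            // Build a thread that reuses the ApiProxy environment of the thread
            // that created it (call this after helper.setUp()).
            static Thread newWorker() {
                final ApiProxy.Environment env = ApiProxy.getCurrentEnvironment();
                return new Thread(new Runnable() {
                    @Override
                    public void run() {
                        ApiProxy.setEnvironmentForCurrentThread(env);
                        MemcacheService service = MemcacheServiceFactory.getMemcacheService();
                        service.increment("test-key", 1L, 1L);
                    }
                });
            }
        }

    Whether the local memcache stub itself behaves atomically under real concurrency is a separate question, but this at least removes the CallNotFoundException.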

    Read the article

  • Readability and IF-block brackets: best practice

    - by MasterPeter
    I am preparing a short tutorial for level 1 uni students learning JavaScript basics. The task is to validate a phone number. The number must not contain non-digits and must be 14 digits long or less. The following code excerpt is what I came up with and I would like to make it as readable as possible. if ( //set of rules for invalid phone number phoneNumber.length == 0 //empty || phoneNumber.length > 14 //too long || /\D/.test(phoneNumber) //contains non-digits ) { setMessageText(invalid); } else { setMessageText(valid); } A simple question I can not quite answer myself and would like to hear your opinions on: How to position the surrounding (outermost) brackets? It's hard to see the difference between a normal and a curly bracket. Do you usually put the last ) on the same line as the last condition? Do you keep the first opening ( on a line by itself? Do you wrap each individual sub-condition in brackets too? Do you align horizontally the first ( with the last ), or do you place the last ) in the same column as the if? Do you keep ) { on a separate line or you place the last ) on the same line with the last sub-condition and then place the opening { on a new line? Or do you just put the ) { on the same line as the last sub-condition? Community wiki. EDIT Please only post opinions regarding the usage and placement of brackets. The code needs not be re-factored. This is for people who have only been introduced to JavaScript a couple of weeks ago. I am not asking for opinions how to write the code so it's shorter or performs better. I would only like to know how do you place brackets around IF-conditions.

    Read the article

  • Neo4j Reading data / performing shortest path calculations on stored data

    - by paddydub
    I'm using the Batch_Insert example to insert data into the database. How can I read this data back from the database? I can't find any examples of how to do this. public static void CreateData() { // create the batch inserter BatchInserter inserter = new BatchInserterImpl( "var/graphdb", BatchInserterImpl.loadProperties( "var/neo4j.props" ) ); Map<String,Object> properties = new HashMap<String,Object>(); properties.put( "name", "Mr. Andersson" ); properties.put( "age", 29 ); long node1 = inserter.createNode( properties ); properties.put( "name", "Trinity" ); properties.remove( "age" ); long node2 = inserter.createNode( properties ); inserter.createRelationship( node1, node2, DynamicRelationshipType.withName( "KNOWS" ), null ); inserter.shutdown(); } I would like to store graph data in the database, graph.makeEdge( "s", "c", "cost", (double) 7 ); graph.makeEdge( "c", "e", "cost", (double) 7 ); graph.makeEdge( "s", "a", "cost", (double) 2 ); graph.makeEdge( "a", "b", "cost", (double) 7 ); graph.makeEdge( "b", "e", "cost", (double) 2 ); Dijkstra<Double> dijkstra = getDijkstra( graph, 0.0, "s", "e" ); What is the best method to store this kind of data with tens of thousands of edges, and then run the Dijkstra algorithm to perform shortest-path calculations on the stored graph data?
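
    Reading the data back is done with the normal embedded API rather than the batch inserter: once inserter.shutdown() has run, open the same store directory and walk nodes and relationships. A sketch against the 1.x API the snippet above uses (the node id is whatever createNode() returned):

        import org.neo4j.graphdb.Direction;
        import org.neo4j.graphdb.GraphDatabaseService;
        import org.neo4j.graphdb.Node;
        import org.neo4j.graphdb.Relationship;
        import org.neo4j.kernel.EmbeddedGraphDatabase;

        public class ReadBack {
            public static void main(String[] args) {
                GraphDatabaseService db = new EmbeddedGraphDatabase("var/graphdb");
                try {
                    Node person = db.getNodeById(1);   // id returned by inserter.createNode(...)
                    System.out.println(person.getProperty("name"));
                    for (Relationship rel : person.getRelationships(Direction.OUTGOING)) {
                        System.out.println(rel.getType().name() + " -> "
                                + rel.getEndNode().getProperty("name"));
                    }
                } finally {
                    db.shutdown();
                }
            }
        }

    For the shortest-path part, the neo4j-graph-algo component's GraphAlgoFactory.dijkstra(...) can run over the stored relationships using the cost property as the edge weight, so the graph only needs to be stored once.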

    Read the article

  • CIL and JVM Little endian to big endian in c# and java

    - by Haythem
    Hello, I am using C# on the client, where I convert double values to a byte array, and Java on the server, where I use writeDouble and readDouble to convert between double values and byte arrays. The problem is that the double values that come out on the Java side are not the double values that were given to C# at the start. writeDouble in Java converts the double argument to a long using the doubleToLongBits method, and then writes that long value to the underlying output stream as an 8-byte quantity, high byte first. doubleToLongBits returns a representation of the specified floating-point value according to the IEEE 754 floating-point "double format" bit layout. The program on the server is expecting 64-102-112-0-0-0-0-0 from C# in order to convert it to 1700.0, but it is receiving 0000014415464 from C# after C# converted 1700.0. This is my code in C#: class User { double workingStatus; public void persist() { byte[] dataByte; using (MemoryStream ms = new MemoryStream()) { using (BinaryWriter bw = new BinaryWriter(ms)) { bw.Write(workingStatus); bw.Flush(); bw.Close(); } dataByte = ms.ToArray(); for (int j = 0; j < dataByte.Length; j++) { Console.Write(dataByte[j]); } } public double WorkingStatus { get { return workingStatus; } set { workingStatus = value; } } } class Test { static void Main() { User user = new User(); user.WorkingStatus = 1700.0; user.persist(); } Thank you for the help.
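
    The float format is the same on both sides; what differs is byte order. .NET's BinaryWriter.Write(double) emits the eight IEEE 754 bytes little-endian, while Java's DataOutputStream/DataInputStream use big-endian ("high byte first", as quoted above). So either reverse the array on one side, or read with an explicit byte order on the Java side - a sketch (the method name is mine):

        import java.io.DataInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        public class LittleEndianDoubles {
            // Read the 8 bytes exactly as C#'s BinaryWriter sent them and decode
            // them little-endian, instead of readDouble()'s big-endian decoding.
            static double readLittleEndianDouble(InputStream in) throws IOException {
                byte[] raw = new byte[8];
                new DataInputStream(in).readFully(raw);
                return ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).getDouble();
            }
        }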

    Read the article
