Search Results

Search found 11675 results on 467 pages for 'parallel testing'.


  • How best to store Subversion version information in EAR's?

    - by Rene
    When receiving a bug report or an "it doesn't work" message, one of my initial questions is always: what version? With different builds at various stages of testing, planning and deployment, this is often a non-trivial question. In the case of releasing Java archives (ear, jar, rar, war) I would like to be able to look in/at the archive and switch to the same branch, version or tag that was the source of the released file. How can I best adjust the Ant build process so that the version information of the svn checkout remains in the created build? I was thinking along the lines of:

    - adding a VERSION file, but with what content?
    - storing information in the META-INF file, but under what property and with which content?
    - copying the sources into the result archive
    - adding svn:properties to all sources, with keywords in places the compiler leaves alone

    I ended up using the svnversion approach (the accepted answer), because it scans the entire subtree as opposed to svn info, which just looks at the current file/directory. For this I defined the SVN task in the Ant file to make it more portable:

        <taskdef name="svn" classname="org.tigris.subversion.svnant.SvnTask">
          <classpath>
            <pathelement location="${dir.lib}/ant/svnant.jar"/>
            <pathelement location="${dir.lib}/ant/svnClientAdapter.jar"/>
            <pathelement location="${dir.lib}/ant/svnkit.jar"/>
            <pathelement location="${dir.lib}/ant/svnjavahl.jar"/>
          </classpath>
        </taskdef>

    Not all builds result in web services. The ear file must keep the same name before deployment because of how it is updated in the application server. Making the file executable is still an option, but until then I just include a version information file:

        <target name="version">
          <svn><wcVersion path="${dir.source}"/></svn>
          <echo file="${dir.build}/VERSION">${revision.range}</echo>
        </target>

    Refs:
    svnversion: http://svnbook.red-bean.com/en/1.1/re57.html
    svn info: http://svnbook.red-bean.com/en/1.1/re13.html
    subclipse svn task: http://subclipse.tigris.org/svnant/svn.html
    svn client: http://svnkit.com/
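    As a complementary sketch only: reading that VERSION file back at runtime from inside the deployed archive. The class name and the resource path "/VERSION" are illustrative assumptions, not part of the build above.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;

        // Hypothetical helper: reads the VERSION file written by the Ant target above,
        // assuming the build packages it at the root of the archive's classpath.
        public final class BuildVersion {

            public static String read() {
                try (InputStream in = BuildVersion.class.getResourceAsStream("/VERSION")) {
                    if (in == null) {
                        return "unknown"; // file was not packaged into this build
                    }
                    BufferedReader reader = new BufferedReader(new InputStreamReader(in));
                    return reader.readLine(); // e.g. the svnversion revision range
                } catch (IOException e) {
                    return "unknown";
                }
            }
        }

    Logging BuildVersion.read() at startup (or showing it on an "about" page) answers the "what version?" question directly from the running build.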


  • Android - Start service on boot

    - by Gady
    From everything I've seen on Stack Exchange and elsewhere, I have everything set up correctly to start an IntentService when Android OS boots. Unfortunately it is not starting on boot, and I'm not getting any errors. Maybe the experts can help...

    Manifest:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.phx.batterylogger"
            android:versionCode="1"
            android:versionName="1.0"
            android:installLocation="internalOnly">

            <uses-sdk android:minSdkVersion="8" />
            <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
            <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
            <uses-permission android:name="android.permission.BATTERY_STATS" />

            <application android:icon="@drawable/icon" android:label="@string/app_name">
                <service android:name=".BatteryLogger"/>
                <receiver android:name=".StartupIntentReceiver">
                    <intent-filter>
                        <action android:name="android.intent.action.BOOT_COMPLETED" />
                    </intent-filter>
                </receiver>
            </application>
        </manifest>

    BroadcastReceiver for startup:

        package com.phx.batterylogger;

        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;

        public class StartupIntentReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                Intent serviceIntent = new Intent(context, BatteryLogger.class);
                context.startService(serviceIntent);
            }
        }

    UPDATE: I tried just about all of the suggestions below, and I added logging such as Log.v("BatteryLogger", "Got to onReceive, about to start service"); to the onReceive handler of the StartupIntentReceiver, and nothing is ever logged. So it isn't even making it to the BroadcastReceiver. I think I'm deploying the APK and testing correctly, just running Debug in Eclipse, and the console says it successfully installs it to my Xoom tablet at \BatteryLogger\bin\BatteryLogger.apk. Then to test, I reboot the tablet and then look at the logs in DDMS and check the Running Services in the OS settings. Does this all sound correct, or am I missing something? Again, any help is much appreciated.
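    For reference, a minimal IntentService skeleton matching the manifest above; the real BatteryLogger is not shown in the question, so its contents here are an assumption for illustration only.

        package com.phx.batterylogger;

        import android.app.IntentService;
        import android.content.Intent;
        import android.util.Log;

        // Hypothetical sketch of the service the receiver is supposed to start.
        public class BatteryLogger extends IntentService {

            public BatteryLogger() {
                super("BatteryLogger"); // name of the background worker thread
            }

            @Override
            protected void onHandleIntent(Intent intent) {
                // Runs on the worker thread once startService() delivers the intent.
                Log.v("BatteryLogger", "Service started, logging battery stats...");
            }
        }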


  • Why aren't min-width and max-width working as I expect?

    - by Nathan Long
    I'm trying to adjust a CSS page layout using min-width and max-width. To simplify the problem, I made this test page. I'm trying it out in the latest versions of Firefox and Chrome with the same results. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Testing min-width and max-width</title> <style type="text/css"> div{float: left; max-width: 400px; min-width: 200px;} div.a{background: orange;} div.b{background: gray;} </style> </head> <body> <div class="a"> (Giant block of filler text here) </div> <div class="b"> (Giant block of filler text here) </div> </body> </html> Here's what I expect to happen: With the browser maximized, the divs sit side by side, each 400px wide: their maximum width Shrink the browser window, and they both shrink to 200px: their minimum width Further shrinking the browser has no effect on them Here's what actually happens, starting at step 2: Shrink the browser window, and as soon as they can't sit side-by-side at their max width, the second div drops below the first Further shrinking the browser makes them get narrower and narrower, as small as I can make the window So here's are my questions: What does max-width mean if the element will sooner hop down in the layout than go lower than its maximum width? What does min-width mean if the element will happily get narrower than that if the browser window keeps shrinking? Is there any way to achieve what I want: have these elements sit side-by-side, happily shrinking until they reach 200px each, and only then adjust the layout so that the second one drops down? And of course... What am I doing wrong?


  • WCF Service Library - make calls from Console App

    - by inutan
    Hello there, I have a WCF Service Library with netTcpBinding. Its app.config as follows: <configuration> <system.serviceModel> <bindings> <netTcpBinding> <binding name="netTcp" maxBufferPoolSize="50000000" maxReceivedMessageSize="50000000"> <readerQuotas maxDepth="500" maxStringContentLength="50000000" maxArrayLength="50000000" maxBytesPerRead="50000000" maxNameTableCharCount="50000000" /> <security mode="None"></security> </binding> </netTcpBinding> </bindings> <services> <service behaviorConfiguration="ReportingComponentLibrary.TemplateServiceBehavior" name="ReportingComponentLibrary.TemplateReportService"> <endpoint address="TemplateService" binding="netTcpBinding" bindingConfiguration="netTcp" contract="ReportingComponentLibrary.ITemplateService"></endpoint> <endpoint address="ReportService" binding="netTcpBinding" bindingConfiguration="netTcp" contract="ReportingComponentLibrary.IReportService"/> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" ></endpoint> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:8001/TemplateReportService" /> <add baseAddress ="http://localhost:8080/TemplateReportService" /> </baseAddresses> </host> </service> </services> <behaviors> <serviceBehaviors> <behavior name="ReportingComponentLibrary.TemplateServiceBehavior"> <serviceMetadata httpGetEnabled="True"/> <serviceDebug includeExceptionDetailInFaults="True" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> I want to call it from a console application for testing purpose. I understand that I can call by adding Service Reference or by adding proxy using svcutil. But in both these cases, my service needs to be up and running (I used WCF Test Client) Is there any other way I can call and test service method from console application?


  • CLR 4.0 inlining policy? (maybe bug with MethodImplOptions.NoInlining)

    - by ControlFlow
    I've testing some new CLR 4.0 behavior in method inlining (cross-assembly inlining) and found some strage results: Assembly ClassLib.dll: using System.Diagnostics; using System; using System.Reflection; using System.Security; using System.Runtime.CompilerServices; namespace ClassLib { public static class A { static readonly MethodInfo GetExecuting = typeof(Assembly).GetMethod("GetExecutingAssembly"); public static Assembly Foo(out StackTrace stack) // 13 bytes { // explicit call to GetExecutingAssembly() stack = new StackTrace(); return Assembly.GetExecutingAssembly(); } public static Assembly Bar(out StackTrace stack) // 25 bytes { // reflection call to GetExecutingAssembly() stack = new StackTrace(); return (Assembly) GetExecuting.Invoke(null, null); } public static Assembly Baz(out StackTrace stack) // 9 bytes { stack = new StackTrace(); return null; } public static Assembly Bob(out StackTrace stack) // 13 bytes { // call of non-inlinable method! return SomeSecurityCriticalMethod(out stack); } [SecurityCritical, MethodImpl(MethodImplOptions.NoInlining)] static Assembly SomeSecurityCriticalMethod(out StackTrace stack) { stack = new StackTrace(); return Assembly.GetExecutingAssembly(); } } } Assembly ConsoleApp.exe using System; using ClassLib; using System.Diagnostics; class Program { static void Main() { Console.WriteLine("runtime: {0}", Environment.Version); StackTrace stack; Console.WriteLine("Foo: {0}\n{1}", A.Foo(out stack), stack); Console.WriteLine("Bar: {0}\n{1}", A.Bar(out stack), stack); Console.WriteLine("Baz: {0}\n{1}", A.Baz(out stack), stack); Console.WriteLine("Bob: {0}\n{1}", A.Bob(out stack), stack); } } Results: runtime: 4.0.30128.1 Foo: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null at ClassLib.A.Foo(StackTrace& stack) at Program.Main() Bar: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null at ClassLib.A.Bar(StackTrace& stack) at Program.Main() Baz: at Program.Main() Bob: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null at Program.Main() So questions are: Why JIT does not inlined Foo and Bar calls as Baz does? They are lower than 32 bytes of IL and are good candidates for inlining. Why JIT inlined call of Bob and inner call of SomeSecurityCriticalMethod that is marked with the [MethodImpl(MethodImplOptions.NoInlining)] attribute? Why GetExecutingAssembly returns a valid assembly when is called by inlined Baz and SomeSecurityCriticalMethod methods? I've expect that it performs the stack walk to detect the executing assembly, but stack will contains only Program.Main() call and no methods of ClassLib assenbly, to ConsoleApp should be returned.


  • How to view ASMX SOAP using Fiddler2?

    - by outer join
    Does anyone know if Fiddler can display the raw SOAP messages for ASMX web services? I'm testing a simple web service using both Fiddler2 and Storm and the results vary (Fiddler shows plain xml while Storm shows the SOAP messages). See sample request/responses below: Fiddler2 Request: POST /webservice1.asmx/Test HTTP/1.1 Accept: */* Referer: http://localhost.:4164/webservice1.asmx?op=Test Accept-Language: en-us User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2; MS-RTC LM 8) Content-Type: application/x-www-form-urlencoded Accept-Encoding: gzip, deflate Host: localhost.:4164 Content-Length: 0 Connection: Keep-Alive Pragma: no-cache Fiddler2 Response: HTTP/1.1 200 OK Server: ASP.NET Development Server/9.0.0.0 Date: Thu, 21 Jan 2010 14:21:50 GMT X-AspNet-Version: 2.0.50727 Cache-Control: private, max-age=0 Content-Type: text/xml; charset=utf-8 Content-Length: 96 Connection: Close <?xml version="1.0" encoding="utf-8"?> <string xmlns="http://tempuri.org/">Hello World</string> Storm Request (body only): <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <Test xmlns="http://tempuri.org/" /> </soap:Body> </soap:Envelope> Storm Response: Status Code: 200 Content Length : 339 Content Type: text/xml; charset=utf-8 Server: ASP.NET Development Server/9.0.0.0 Status Description: OK <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <soap:Body> <TestResponse xmlns="http://tempuri.org/"> <TestResult>Hello World</TestResult> </TestResponse> </soap:Body> </soap:Envelope> Thanks for any help.


  • Impersonation on Windows 2000 to Windows XP Leaves Connections Open

    - by Tallek
    I'm running on a Windows 2000 Pro SP4 box (off domain) and trying to impersonate a local user on a Windows XP box (on domain). I'm using code very similar to the WindowsImpersonationContextFacade in the question posted here: http://stackoverflow.com/questions/879704/how-can-i-temporarily-impersonate-a-user-to-open-a-file. I am using impersonation to remotely start and stop windows services as well as access network shares (for some automated integration tests). To get this working, i had to use LOGON32_PROVIDER_DEFAULT and LOGON32_LOGON_NEW_CREDENTIALS when calling LogonUser. Everything worked beautifully ( Windows XP on domain to Windows XP on domain, Windows XP on domain to Windows Server 2003 off domain, and even Windows XP on domain to Windows 2000 off domain). The one issue was running on Windows 2000 Pro SP4 off the domain and trying to impersonate a local user on a Windows XP box running on the domain. To get the Windows 2000 piece working, i had to use LOGON32_PROVIDER_WINNT50 and LOGON32_LOGON_NEW_CREDENTIALS when calling LogonUser. This seemed to get me 95% of the way there, i could now impersonate the local user on the XP box and start/stop services as well as access a network share using the impersonated credentials. I'm running in to one problem though, calling Undo impersonation and closing the token handle seems to leave the connection to the remote box open. After about 10 or so impersonation calls, further impersonation attempts will fail with an error saying something about too many connections are currently open. If i look at the Computer Management - System Tools - Shared Folders - Sessions on my remote Windows XP box, i can see about 10 sessions open to the Windows 2000 box. I can manually close these (i think they may eventually close themselves, but not very quickly) and then impersonation begins working again few more times. This open session issue doesn't seem to be a problem in any of my other test scenarios, just when running locally on a Windows 2000 box. Any ideas? Edit 1: After some more testing and trying out many different things, this seems to be an issue with open sessions not being reused. On Windows 2000 only, every call to LogonUser to get a token and then using that token to impersonate seems to result in a new session being created. I'm guessing Windows XP & Windows Server 2003 are reusing open sessions since i don't seem to be having any issues with them. If I call LogonUser once, then cache the token, I seem to be able to make as many calls to impersonate as I need using the cached token without running in to the "too many connections" issue. This seems like an ugly work around though since i can't call CloseHandle() on my token every time i perform impersonation. Anybody have any thoughts or ideas, or am i stuck with this ugly hack? Thanks


  • Prevent SQL Injection in Dynamic column names

    - by Mr Shoubs
    I can't get away without writing some dynamic SQL conditions in a part of my system (using Postgres). My question is how best to avoid SQL injection with the method I am currently using.

    EDIT (reasoning): There are many columns in a number of tables (a number which only grows and is maintained elsewhere). I need a way to let the user decide which (predefined) column they want to query (and, if necessary, apply string functions to). The query itself is far too complex for the user to write themselves, nor do they have access to the db. There are thousands of users with varying requirements and I need to remain as flexible as possible - I shouldn't have to revisit the code unless the main query needs to change - and there is no way of knowing what conditions the user will need to use.

    I have objects (received via a web service) that generate a condition (the generation method is below - it isn't perfect yet) for some large SQL queries. The _FieldName is user editable (the parameter name was too, but it didn't need to be) and I am worried it could be an attack vector. I put double quotes (see quoted identifiers) around the field name in an attempt to sanitize the string; this way it can never be a keyword. I could also look up the field name against a list of fields, but that would be difficult to maintain on a timely basis. Unfortunately the user must enter the condition criteria. I am sure there must be more I can add to the sanitize method - and does quoting the column name make it safe? (My limited testing seems to suggest so.) An example built condition would be:

        AND upper(brandloaded.make) like 'O%' and upper(brandloaded.make) not like 'OTHERBRAND'

    Any help or suggestions are appreciated.

        Public Function GetCondition() As String
            Dim sb As New Text.StringBuilder
            'put quote around the table name in an attempt to prevent some sql injection
            'http://www.postgresql.org/docs/8.2/static/sql-syntax-lexical.html
            sb.AppendFormat(" {0} ""{1}"" ", _LogicOperator.ToString, _FieldName)
            Select Case _ConditionOperator
                Case ConditionOperatorOptions.Equals
                    sb.Append(" = ")
                ...
            End Select
            sb.AppendFormat(" {0} ", Me.UniqueParameterName) 'for parameter
            Return Me.Sanitize(sb)
        End Function

        Private Function Sanitize(ByVal sb As Text.StringBuilder) As String
            'compare against a similar blacklist mentioned here: http://forums.asp.net/t/1254125.aspx
            sb.Replace(";", "")
            sb.Replace("'", "")
            sb.Replace("\", "")
            sb.Replace(Chr(8), "")
            Return sb.ToString
        End Function

        Public ReadOnly Property UniqueParameterName() As String
            Get
                Return String.Concat(":", _UniqueIdentifier)
            End Get
        End Property
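    To illustrate the "look the field name up against a list of fields" idea in code - a sketch only, written in Java/JDBC rather than the VB.NET above, with a hypothetical column whitelist and table - the identifier is validated and quoted, while the user-entered condition value is always bound as a parameter:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.Set;

        public class DynamicColumnQuery {

            // Hypothetical whitelist of columns the user may filter on.
            private static final Set<String> ALLOWED_COLUMNS =
                    Set.of("make", "model", "registrationdate");

            public static ResultSet findByColumn(Connection conn, String column, String pattern)
                    throws SQLException {
                // 1. Reject anything that is not a known column; the identifier is
                //    never taken verbatim from free-form user input.
                if (!ALLOWED_COLUMNS.contains(column.toLowerCase())) {
                    throw new IllegalArgumentException("Unknown column: " + column);
                }
                // 2. Quote the identifier (PostgreSQL double-quote style) and bind the
                //    condition value as a parameter instead of concatenating it.
                String sql = "SELECT * FROM brandloaded WHERE upper(\"" + column + "\") LIKE ?";
                PreparedStatement ps = conn.prepareStatement(sql);
                ps.setString(1, pattern);
                return ps.executeQuery();
            }
        }

    Quoting alone keeps the identifier from being read as a keyword, but the whitelist is what actually removes it as an injection vector.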


  • C++ class member functions selected by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched stackoverflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instatiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state info was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?


  • Is it possible to reliably auto-decode user files to Unicode? [C#]

    - by NVRAM
    I have a web application that allows users to upload their content for processing. The processing engine expects UTF-8 (and I'm composing XML from multiple users' files), so I need to ensure that I can properly decode the uploaded files. Since I'd be surprised if any of my users knew their files even were encoded, I have very little hope they'd be able to correctly specify the encoding (decoder) to use. And so, my application is left with the task of detecting before decoding. This seems like such a universal problem that I'm surprised not to find either a framework capability or a general recipe for the solution. Can it be I'm not searching with meaningful search terms? I've implemented BOM-aware detection (http://en.wikipedia.org/wiki/Byte_order_mark), but I'm not sure how often files will be uploaded without a BOM to indicate encoding, and this isn't useful for most non-UTF files. My questions boil down to:

    - Is BOM-aware detection sufficient for the vast majority of files?
    - In the case where BOM detection fails, is it possible to try different decoders and determine if they are "valid"? (My attempts indicate the answer is "no.")
    - Under what circumstances will a "valid" file fail with the C# encoder/decoder framework?
    - Is there a repository anywhere that has a multitude of files with various encodings to use for testing?

    While I'm specifically asking about C#/.NET, I'd like to know the answer for Java, Python and other languages for the next time I have to do this. So far I've found:

    - A "valid" UTF-16 file with Ctrl-S characters caused encoding to UTF-8 to throw an exception (illegal character?) (that was an XML encoding exception).
    - Decoding a valid UTF-16 file with UTF-8 succeeds but gives text with null characters. Huh?

    Currently I only expect UTF-8, UTF-16 and probably ISO-8859-1 files, but I want the solution to be extensible if possible. My existing set of input files isn't nearly broad enough to uncover all the problems that will occur with live files. Although the files I'm trying to decode are "text", I think they are often created with methods that leave garbage characters in them. Hence "valid" files may not be "pure". Oh joy. Thanks.
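    Since the question explicitly asks what this would look like in Java as well, here is a minimal sketch - the candidate charset list, the helper's name and the file handling are assumptions - combining BOM sniffing with strict trial decoding. A decoder configured to REPORT rejects byte sequences that are not valid in the candidate charset, which is the closest thing to asking whether a decode is "valid":

        import java.nio.ByteBuffer;
        import java.nio.charset.*;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.List;

        public class EncodingGuesser {

            // Try a BOM first, then fall back to strict trial decodes.
            public static Charset guess(byte[] data) {
                if (data.length >= 3 && (data[0] & 0xFF) == 0xEF
                        && (data[1] & 0xFF) == 0xBB && (data[2] & 0xFF) == 0xBF) {
                    return StandardCharsets.UTF_8;
                }
                if (data.length >= 2 && (data[0] & 0xFF) == 0xFE && (data[1] & 0xFF) == 0xFF) {
                    return StandardCharsets.UTF_16BE;
                }
                if (data.length >= 2 && (data[0] & 0xFF) == 0xFF && (data[1] & 0xFF) == 0xFE) {
                    return StandardCharsets.UTF_16LE;
                }
                // No BOM: attempt strict decodes. REPORT makes the decoder throw on
                // malformed input instead of silently substituting characters.
                // Note: ISO-8859-1 maps every byte, so it only makes sense as a last resort.
                for (Charset cs : List.of(StandardCharsets.UTF_8, StandardCharsets.ISO_8859_1)) {
                    CharsetDecoder decoder = cs.newDecoder()
                            .onMalformedInput(CodingErrorAction.REPORT)
                            .onUnmappableCharacter(CodingErrorAction.REPORT);
                    try {
                        decoder.decode(ByteBuffer.wrap(data));
                        return cs;
                    } catch (CharacterCodingException e) {
                        // not valid in this charset; try the next candidate
                    }
                }
                return null; // undetermined
            }

            public static void main(String[] args) throws Exception {
                byte[] data = Files.readAllBytes(Paths.get(args[0]));
                System.out.println("Guessed charset: " + guess(data));
            }
        }

    This mirrors the limitation described above: trial decoding can rule UTF-8 out, but it cannot distinguish between single-byte encodings such as ISO-8859-1 and Windows-1252.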


  • Drag to pan on a UserControl

    - by Matías
    Hello, I'm trying to build my own "PictureBox like" control adding some functionalities. For example, I want to be able to pan over a big image by simply clicking and dragging with the mouse. The problem seems to be on my OnMouseMove method. If I use the following code I get the drag speed and precision I want, but of course, when I release the mouse button and try to drag again the image is restored to its original position. using System.Drawing; using System.Windows.Forms; namespace Testing { public partial class ScrollablePictureBox : UserControl { private Image image; private bool centerImage; public Image Image { get { return image; } set { image = value; Invalidate(); } } public bool CenterImage { get { return centerImage; } set { centerImage = value; Invalidate(); } } public ScrollablePictureBox() { InitializeComponent(); SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.OptimizedDoubleBuffer, true); Image = null; AutoScroll = true; AutoScrollMinSize = new Size(0, 0); } private Point clickPosition; private Point scrollPosition; protected override void OnMouseDown(MouseEventArgs e) { base.OnMouseDown(e); clickPosition.X = e.X; clickPosition.Y = e.Y; } protected override void OnMouseMove(MouseEventArgs e) { base.OnMouseMove(e); if (e.Button == MouseButtons.Left) { scrollPosition.X = clickPosition.X - e.X; scrollPosition.Y = clickPosition.Y - e.Y; AutoScrollPosition = scrollPosition; } } protected override void OnPaint(PaintEventArgs e) { base.OnPaint(e); e.Graphics.FillRectangle(new Pen(BackColor).Brush, 0, 0, e.ClipRectangle.Width, e.ClipRectangle.Height); if (Image == null) return; int centeredX = AutoScrollPosition.X; int centeredY = AutoScrollPosition.Y; if (CenterImage) { //Something not relevant } AutoScrollMinSize = new Size(Image.Width, Image.Height); e.Graphics.DrawImage(Image, new RectangleF(centeredX, centeredY, Image.Width, Image.Height)); } } } But if I modify my OnMouseMove method to look like this: protected override void OnMouseMove(MouseEventArgs e) { base.OnMouseMove(e); if (e.Button == MouseButtons.Left) { scrollPosition.X += clickPosition.X - e.X; scrollPosition.Y += clickPosition.Y - e.Y; AutoScrollPosition = scrollPosition; } } ... you will see that the dragging is not smooth as before, and sometimes behaves weird (like with lag or something). What am I doing wrong? I've also tried removing all "base" calls on a desperate movement to solve this issue, haha, but again, it didn't work. Thanks for your time.


  • Why does Git.pm on cygwin complain about 'Out of memory during "large" request'?

    - by Charles Ma
    Hi, I'm getting this error while doing a git svn rebase in cygwin Out of memory during "large" request for 268439552 bytes, total sbrk() is 140652544 bytes at /usr/lib/perl5/site_perl/Git.pm line 898, <GEN1> line 3. 268439552 is 256MB. Cygwin's maxium memory size is set to 1024MB so I'm guessing that it has a different maximum memory size for perl? How can I increase the maximum memory size that perl programs can use? update: This is where the error occurs (in Git.pm): while (1) { my $bytesLeft = $size - $bytesRead; last unless $bytesLeft; my $bytesToRead = $bytesLeft < 1024 ? $bytesLeft : 1024; my $read = read($in, $blob, $bytesToRead, $bytesRead); //line 898 unless (defined($read)) { $self->_close_cat_blob(); throw Error::Simple("in pipe went bad"); } $bytesRead += $read; } I've added a print before line 898 to print out $bytesToRead and $bytesRead and the result was 1024 for $bytesToRead, and 134220800 for $bytesRead, so it's reading 1024 bytes at a time and it has already read 128MB. Perl's 'read' function must be out of memory and is trying to request for double it's memory size...is there a way to specify how much memory to request? or is that implementation dependent? UPDATE2: While testing memory allocation in cygwin: This C program's output was 1536MB int main() { unsigned int bit=0x40000000, sum=0; char *x; while (bit > 4096) { x = malloc(bit); if (x) sum += bit; bit >>= 1; } printf("%08x bytes (%.1fMb)\n", sum, sum/1024.0/1024.0); return 0; } While this perl program crashed if the file size is greater than 384MB (but succeeded if the file size was less). open(F, "<400") or die("can't read\n"); $size = -s "400"; $read = read(F, $s, $size); The error is similar Out of memory during "large" request for 536875008 bytes, total sbrk() is 217088 bytes at mem.pl line 6.


  • Closure Tables - Is this enough data to display a tree view?

    - by James Pitt
    Here is the table I have created by testing the closure table method:

        | id  | parentId | childId | hops |
        | 270 |    6     |    6    |  0   |
        | 271 |    7     |    7    |  0   |
        | 272 |    8     |    8    |  0   |
        | 273 |    9     |    9    |  0   |
        | 276 |   10     |   10    |  0   |
        | 281 |    9     |   10    |  1   |
        | 282 |    7     |    9    |  1   |
        | 283 |    7     |   10    |  2   |
        | 285 |    7     |    8    |  1   |
        | 286 |    6     |    7    |  1   |
        | 287 |    6     |    9    |  2   |
        | 288 |    6     |   10    |  3   |
        | 289 |    6     |    8    |  2   |
        | 293 |    6     |    9    |  1   |
        | 294 |    6     |   10    |  2   |

    I am trying to create a simple tree of this using PHP. There does not seem to be enough data to build the tree. For example, when I look purely at parentId = 6:

        - Part 6
          - Part 7
            - ?
            - ?
          - Part 9
            - ?
            - ?

    We know that parts 8 and 10 exist below Part 7 or 9, but not which. We know that part 10 exists at both 3 and 4 nodes deep, but where? If I look at other data in the table it is possible to tell it should be:

        - Part 6
          - Part 7
            - Part 9
              - Part 10
          - Part 9
            - Part 10

    I thought one of the benefits of closure tables was that there was no need for recursive queries? Could you help explain what I am doing wrong?

    EDIT: For clarification, this is a mapping table. There is another table called "parts" which has a column called part_id that correlates to both the parentId and childId columns in the "closure" table. The "id" column in the table above (closure) is just for the purposes of maintaining a primary key; it is not really necessary. The methods I have used to create this closure table are described in the following article: http://dirtsimple.org/2010/11/simplest-way-to-do-tree-based-queries.html

    EDIT2: It can have two and three hops. I will explain more easily by assigning names to the items:

        Part 6  = Bicycle
        Part 7  = Gears
        Part 8  = Chain
        Part 9  = Bolt
        Part 10 = Nut

    Nut is part of Bolt. The Bolt and Nut combo exists directly within Bicycle and within Gears, which is part of Bicycle. In relation to what method to use, I have looked at Adjacency, Edges, Enum Paths, Closures, DAGs (networks) and the Nested Set Model. I am still trying to work out what is what, but this is an extremely complex component database where there are multiple parents and any modification to a sub-tree must propagate through the other trees. More importantly, there will be insertions, deletions and tree views for which I wish to avoid recursion during general use, even at the cost of database space and query time during entry.
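    As a sketch of why no recursion is needed at query time - written in Java purely for illustration, with the rows hard-coded from the table above rather than read from the database - the tree can be rebuilt from the hops = 1 rows alone, since those are exactly the direct parent-child edges:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class ClosureTreePrinter {

            public static void main(String[] args) {
                // (parentId, childId) pairs taken from the rows above where hops = 1;
                // in a real application these would come from something like
                //   SELECT parentId, childId FROM closure WHERE hops = 1
                int[][] directEdges = { {9, 10}, {7, 9}, {7, 8}, {6, 7}, {6, 9} };

                Map<Integer, List<Integer>> children = new HashMap<>();
                for (int[] edge : directEdges) {
                    children.computeIfAbsent(edge[0], k -> new ArrayList<>()).add(edge[1]);
                }
                print(6, 0, children); // part 6 is the root in this example
            }

            private static void print(int id, int depth, Map<Integer, List<Integer>> children) {
                System.out.println("  ".repeat(depth) + "- Part " + id);
                for (int child : children.getOrDefault(id, List.of())) {
                    print(child, depth + 1, children);
                }
            }
        }

    The higher-hop rows are what make subtree queries ("everything under part 6") possible with a single non-recursive SELECT; for rendering the tree itself, the direct edges are enough.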


  • JPA Entity Manager resource handling

    - by chiragshahkapadia
    Every time I call a JPA method it's creating the entity and binding the query. My persistence properties are:

        <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/>
        <property name="hibernate.cache.provider_class" value="net.sf.ehcache.hibernate.SingletonEhCacheProvider"/>
        <property name="hibernate.cache.use_second_level_cache" value="true"/>
        <property name="hibernate.cache.use_query_cache" value="true"/>

    And I am creating the entity manager the way shown below:

        emf = Persistence.createEntityManagerFactory("pu");
        em = emf.createEntityManager();
        em = Persistence.createEntityManagerFactory("pu").createEntityManager();

    Is there any nice way to manage the entity manager resource instead of creating a new one every time, or any property I can set in persistence? Remember, it's JPA. See the binding log below, produced every time:

        15:35:15,527 INFO [AnnotationBinder] Binding entity from annotated class: *
        15:35:15,527 INFO [QueryBinder] Binding Named query: * = *
        15:35:15,527 INFO [QueryBinder] Binding Named query: * = *
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [EntityBinder] Bind entity com.* on table *
        15:35:15,542 INFO [HibernateSearchEventListenerRegister] Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled.
        15:35:15,542 INFO [NamingHelper] JNDI InitialContext properties:{}
        15:35:15,542 INFO [DatasourceConnectionProvider] Using datasource:
        15:35:15,542 INFO [SettingsFactory] RDBMS: and Real Application Testing options
        15:35:15,542 INFO [SettingsFactory] JDBC driver: Oracle JDBC driver, version: 9.2.0.1.0
        15:35:15,542 INFO [Dialect] Using dialect: org.hibernate.dialect.Oracle10gDialect
        15:35:15,542 INFO [TransactionFactoryFactory] Transaction strategy: org.hibernate.transaction.JDBCTransactionFactory
        15:35:15,542 INFO [TransactionManagerLookupFactory] No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
        15:35:15,542 INFO [SettingsFactory] Automatic flush during beforeCompletion(): disabled
        15:35:15,542 INFO [SettingsFactory] Automatic session close at end of transaction: disabled
        15:35:15,542 INFO [SettingsFactory] JDBC batch size: 15
        15:35:15,542 INFO [SettingsFactory] JDBC batch updates for versioned data: disabled
        15:35:15,542 INFO [SettingsFactory] Scrollable result sets: enabled
        15:35:15,542 INFO [SettingsFactory] JDBC3 getGeneratedKeys(): disabled
        15:35:15,542 INFO [SettingsFactory] Connection release mode: auto
        15:35:15,542 INFO [SettingsFactory] Default batch fetch size: 1
        15:35:15,542 INFO [SettingsFactory] Generate SQL with comments: disabled
        15:35:15,542 INFO [SettingsFactory] Order SQL updates by primary key: disabled
        15:35:15,542 INFO [SettingsFactory] Order SQL inserts for batching: disabled
        15:35:15,542 INFO [SettingsFactory] Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
        15:35:15,542 INFO [ASTQueryTranslatorFactory] Using ASTQueryTranslatorFactory
        15:35:15,542 INFO [SettingsFactory] Query language substitutions: {}
        15:35:15,542 INFO [SettingsFactory] JPA-QL strict compliance: enabled
        15:35:15,542 INFO [SettingsFactory] Second-level cache: enabled
        15:35:15,542 INFO [SettingsFactory] Query cache: enabled
        15:35:15,542 INFO [SettingsFactory] Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
        15:35:15,542 INFO [RegionFactoryCacheProviderBridge] Cache provider: net.sf.ehcache.hibernate.SingletonEhCacheProvider
        15:35:15,542 INFO [SettingsFactory] Optimize cache for minimal puts: disabled
        15:35:15,542 INFO [SettingsFactory] Structured second-level cache entries: disabled
        15:35:15,542 INFO [SettingsFactory] Query cache factory: org.hibernate.cache.StandardQueryCacheFactory
        15:35:15,542 INFO [SettingsFactory] Statistics: disabled
        15:35:15,542 INFO [SettingsFactory] Deleted entity synthetic identifier rollback: disabled
        15:35:15,542 INFO [SettingsFactory] Default entity-mode: pojo
        15:35:15,542 INFO [SettingsFactory] Named query checking : enabled
        15:35:15,542 INFO [SessionFactoryImpl] building session factory
        15:35:15,542 INFO [SessionFactoryObjectFactory] Not binding factory to JNDI, no JNDI name configured
        15:35:15,542 INFO [UpdateTimestampsCache] starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache
        15:35:15,542 INFO [StandardQueryCache] starting query cache at region: org.hibernate.cache.StandardQueryCache
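    A common way to avoid rebuilding all of that on every call is to create the EntityManagerFactory once per application and only open short-lived EntityManagers from it. A minimal sketch of that pattern - the persistence-unit name "pu" comes from the question, while the holder class and method names are assumptions:

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        // Hypothetical holder: the expensive factory is built once; EntityManagers
        // are cheap and created per unit of work, then closed.
        public final class PersistenceUtil {

            private static final EntityManagerFactory EMF =
                    Persistence.createEntityManagerFactory("pu");

            private PersistenceUtil() {
            }

            public static EntityManager newEntityManager() {
                return EMF.createEntityManager();
            }

            public static void shutdown() {
                EMF.close(); // call once, on application shutdown
            }
        }

    Callers then do EntityManager em = PersistenceUtil.newEntityManager(); ... em.close();, and the entity binding and query binding shown in the log happen only once, when the factory is first built.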


  • Decrypting data from a secure socket

    - by Ronald
    I'm working on a server application in Java. I've successfully got past the handshake portion of the communication process, but how do I go about decrypting my input stream? Here is how I set up my server:

        import java.io.IOException;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.ArrayList;
        import java.util.HashMap;

        import javax.net.ServerSocketFactory;
        import javax.net.ssl.SSLServerSocket;
        import javax.net.ssl.SSLServerSocketFactory;

        import org.json.me.JSONException;

        import dictionary.Dictionary;

        public class Server {
            private static int port = 1234;

            public static void main(String[] args) throws JSONException {
                System.setProperty("javax.net.ssl.keyStore", "src/my.keystore");
                System.setProperty("javax.net.ssl.keyStorePassword", "test123");

                System.out.println("Starting server on port: " + port);
                HashMap<String, Game> games = new HashMap<String, Game>();
                final String[] enabledCipherSuites = { "SSL_RSA_WITH_RC4_128_SHA" };
                try {
                    SSLServerSocketFactory socketFactory = (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
                    SSLServerSocket listener = (SSLServerSocket) socketFactory.createServerSocket(port);
                    listener.setEnabledCipherSuites(enabledCipherSuites);
                    Socket server;

                    Dictionary dict = new Dictionary();
                    Game game = new Game(dict); //for testing, creates 1 global game.

                    while (true) {
                        server = listener.accept();
                        ClientConnection conn = new ClientConnection(server, game, "User");
                        Thread t = new Thread(conn);
                        t.start();
                    }
                } catch (IOException e) {
                    System.out.println("Failed setting up on port: " + port);
                    e.printStackTrace();
                }
            }
        }

    I used a BufferedReader to get the data sent from the client:

        BufferedReader d = new BufferedReader(new InputStreamReader(socket.getInputStream()));

    After the handshake is complete it appears like I'm getting encrypted data. I did some research online and it seems like I might need to use a Cipher, but I'm not sure. Any ideas?
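    For what it's worth, with javax.net.ssl sockets the streams returned by getInputStream() and getOutputStream() already carry decrypted application data once the handshake has completed, so the reading side can be plain I/O. Below is a rough sketch of what the ClientConnection's run() might look like; the real class is not shown in the question, so its constructor and behaviour here are simplified assumptions (the question's version also takes a Game and a user name).

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.Socket;

        // Hypothetical, simplified version of the ClientConnection used above.
        public class ClientConnection implements Runnable {

            private final Socket socket; // an SSLSocket accepted by the listener

            public ClientConnection(Socket socket) {
                this.socket = socket;
            }

            @Override
            public void run() {
                try (BufferedReader in = new BufferedReader(
                             new InputStreamReader(socket.getInputStream()));
                     PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        // 'line' is already plaintext; the SSLSocket removed the TLS layer.
                        out.println("echo: " + line);
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }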


  • Is this an idiomatic way to pass mocks into objects?

    - by Billy ONeal
    I'm a bit confused about passing in this mock class into an implementation class. It feels wrong to have all this explicitly managed memory flying around. I'd just pass the class by value but that runs into the slicing problem. Am I missing something here? Implementation: namespace detail { struct FileApi { virtual HANDLE CreateFileW( __in LPCWSTR lpFileName, __in DWORD dwDesiredAccess, __in DWORD dwShareMode, __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes, __in DWORD dwCreationDisposition, __in DWORD dwFlagsAndAttributes, __in_opt HANDLE hTemplateFile ) { return ::CreateFileW(lpFileName, dwDesiredAccess, dwShareMode, lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes, hTemplateFile); } virtual void CloseHandle(HANDLE handleToClose) { ::CloseHandle(handleToClose); } }; } class File : boost::noncopyable { HANDLE hWin32; boost::scoped_ptr<detail::FileApi> fileApi; public: File( __in LPCWSTR lpFileName, __in DWORD dwDesiredAccess, __in DWORD dwShareMode, __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes, __in DWORD dwCreationDisposition, __in DWORD dwFlagsAndAttributes, __in_opt HANDLE hTemplateFile, __in detail::FileApi * method = new detail::FileApi() ) { fileApi.reset(method); hWin32 = fileApi->CreateFileW(lpFileName, dwDesiredAccess, dwShareMode, lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes, hTemplateFile); } }; namespace detail { struct FileApi { virtual HANDLE CreateFileW( __in LPCWSTR lpFileName, __in DWORD dwDesiredAccess, __in DWORD dwShareMode, __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes, __in DWORD dwCreationDisposition, __in DWORD dwFlagsAndAttributes, __in_opt HANDLE hTemplateFile ) { return ::CreateFileW(lpFileName, dwDesiredAccess, dwShareMode, lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes, hTemplateFile); } virtual void CloseHandle(HANDLE handleToClose) { ::CloseHandle(handleToClose); } }; } class File : boost::noncopyable { HANDLE hWin32; boost::scoped_ptr<detail::FileApi> fileApi; public: File( __in LPCWSTR lpFileName, __in DWORD dwDesiredAccess, __in DWORD dwShareMode, __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes, __in DWORD dwCreationDisposition, __in DWORD dwFlagsAndAttributes, __in_opt HANDLE hTemplateFile, __in detail::FileApi * method = new detail::FileApi() ) { fileApi.reset(method); hWin32 = fileApi->CreateFileW(lpFileName, dwDesiredAccess, dwShareMode, lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes, hTemplateFile); } ~File() { fileApi->CloseHandle(hWin32); } }; Tests: namespace detail { struct MockFileApi : public FileApi { MOCK_METHOD7(CreateFileW, HANDLE(LPCWSTR, DWORD, DWORD, LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE)); MOCK_METHOD1(CloseHandle, void(HANDLE)); }; } using namespace detail; using namespace testing; TEST(Test_File, OpenPassesArguments) { MockFileApi * api = new MockFileApi; EXPECT_CALL(*api, CreateFileW(Eq(L"BozoFile"), Eq(56), Eq(72), Eq(reinterpret_cast<LPSECURITY_ATTRIBUTES>(67)), Eq(98), Eq(102), Eq(reinterpret_cast<HANDLE>(98)))) .Times(1).WillOnce(Return(reinterpret_cast<HANDLE>(42))); File test(L"BozoFile", 56, 72, reinterpret_cast<LPSECURITY_ATTRIBUTES>(67), 98, 102, reinterpret_cast<HANDLE>(98), api); }


  • iPhone: odd problem when using a custom cell

    - by Brodie4598
    Please note where I have the NSLOG. All it is displaying in the log is the first three items in the nameSection. After some testing, I discovered it is displaying how many keys there are because if I add a key to the plist, it will log a fourth item in log. nameSection should be an array of the strings that make up the key array in the plist file. the plist file has 3 dictionaries, each with several arrays of strings. The code picks the dictionary I am working with correctly, then should use the array names as sections in the table and the strings en each array as what to display in each cell. so if the dictionary i am working with has 3 arrays, NSLOG will display 3 strings from the first array: 2010-05-01 17:03:26.957 Checklists[63926:207] string0 2010-05-01 17:03:26.960 Checklists[63926:207] string1 2010-05-01 17:03:26.962 Checklists[63926:207] string2 then stop with: 2010-05-01 17:03:26.963 Checklists[63926:207] * Terminating app due to uncaught exception 'NSRangeException', reason: '* -[NSCFArray objectAtIndex:]: index (3) beyond bounds (3)' if i added an array to the dictionary, it log 4 items instead of 3. I hope this explanation makes sense... -(NSInteger)numberOfSectionsInTableView:(UITableView *)tableView{ return [keys count]; } -(NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger) section { NSString *key = [keys objectAtIndex:section]; NSArray *nameSection = [names objectForKey:key]; return [nameSection count]; } -(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { NSUInteger section = [indexPath section]; NSString *key = [keys objectAtIndex: section]; NSArray *nameSection = [names objectForKey:key]; static NSString *SectionsTableIdentifier = @"SectionsTableIdentifier"; static NSString *ChecklistCellIdentifier = @"ChecklistCellIdentifier "; ChecklistCell *cell = (ChecklistCell *)[tableView dequeueReusableCellWithIdentifier: SectionsTableIdentifier]; if (cell == nil) { NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"ChecklistCell" owner:self options:nil]; for (id oneObject in nib) if ([oneObject isKindOfClass:[ChecklistCell class]]) cell = (ChecklistCell *)oneObject; } NSUInteger row = [indexPath row]; NSDictionary *rowData = [self.keys objectAtIndex:row]; NSString *tempString = [[NSString alloc]initWithFormat:@"%@",[nameSection objectAtIndex:row]]; NSLog(@"%@",tempString); cell.colorLabel.text = [tempArray objectAtIndex:0]; cell.nameLabel.text = [tempArray objectAtIndex:1]; return cell; return cell; } - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath]; if (cell.accessoryType == UITableViewCellAccessoryNone) { cell.accessoryType = UITableViewCellAccessoryCheckmark; } else if (cell.accessoryType == UITableViewCellAccessoryCheckmark) { cell.accessoryType = UITableViewCellAccessoryNone; } [tableView deselectRowAtIndexPath:indexPath animated:NO]; } -(NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section{ NSString *key = [keys objectAtIndex:section]; return key; }


  • Custom Tag implementation issue

    - by Appps
    I have a custom tag as follows. repeat and heading tag have doAfterBody method implemented. <csajsp:repeat reps="5"> <LI> <csajsp:heading bgColor="BLACK"> White on Black Heading </csajsp:heading> </LI> </csajsp:repeat> /* Repeat tag Class*/ public void setReps(String repeats) { System.out.println("TESTING"+repeats); //sets the reps variable. } public int doAfterBody() { System.out.println("Inside repeate tag"+reps); if (reps-- >= 1) { BodyContent body = getBodyContent(); try { JspWriter out = body.getEnclosingWriter(); System.out.println("BODY"+body.getString()); out.println(body.getString()); body.clearBody(); // Clear for next evaluation } catch(IOException ioe) { System.out.println("Error in RepeatTag: " + ioe); } return(EVAL_BODY_TAG); } else { return(SKIP_BODY); } } /* Class of Heading tag */ public int doAfterBody() { System.out.println("inside heading tag"); BodyContent body = getBodyContent(); System.out.println(body.getString()); try { JspWriter out = body.getEnclosingWriter(); out.print("NEW TEXT"); } catch(IOException ioe) { System.out.println("Error in FilterTag: " + ioe); } // SKIP_BODY means I'm done. If I wanted to evaluate // and handle the body again, I'd return EVAL_BODY_TAG. return(SKIP_BODY); } public int doEndTag() { try { JspWriter out = pageContext.getOut(); out.print("NEW TEXT 2"); } catch(IOException ioe) { System.out.println("Error in HeadingTag: " + ioe); } return(EVAL_PAGE); // Continue with rest of JSP page } The order in which SOP are printed is 1) Setter method of csajsp:repeat is called. 2) White on Black Heading is printed. ie doAfterBody of csajsp:heading tag is called. I don't know why it is not calling doAfterBody of csajsp:repeat tag. Please help me to understand this. Thanks in advance.


  • What Amazon S3 .NET Library is most useful and efficient?

    - by Geo
    There are two main open source .net Amazon S3 libraries. Three Sharp LitS3 I am currently using LitS3 in our MVC demo project, but there is some criticism about it. Has anyone here used both libraries so they can give an objective point of view. Below some sample calls using LitS3: On demo controller: private S3Service s3 = new S3Service() { AccessKeyID = "Thekey", SecretAccessKey = "testing" }; public ActionResult Index() { ViewData["Message"] = "Welcome to ASP.NET MVC!"; return View("Index",s3.GetAllBuckets()); } On demo view: <% foreach (var item in Model) { %> <p> <%= Html.Encode(item.Name) %> </p> <% } %> EDIT 1: Since I have to keep moving and there is no clear indication of what library is more effective and kept more up to date, I have implemented a repository pattern with an interface that will allow me to change library if I need to in the future. Below is a section of the S3Repository that I have created and will let me change libraries in case I need to: using LitS3; namespace S3Helper.Models { public class S3Repository : IS3Repository { private S3Service _repository; #region IS3Repository Members public IQueryable<Bucket> FindAllBuckets() { return _repository.GetAllBuckets().AsQueryable(); } public IQueryable<ListEntry> FindAllObjects(string BucketName) { return _repository.ListAllObjects(BucketName).AsQueryable(); } #endregion If you have any information about this question please let me know in a comment, and I will get back and edit the question. EDIT 2: Since this question is not getting attention, I integrated both libraries in my web app to see the differences in design, I know this is probably a waist of time, but I really want a good long run solution. Below you will see two samples of the same action with the two libraries, maybe this will motivate some of you to let me know your thoughts. WITH THREE SHARP LIBRARY: public IQueryable<T> FindAllBuckets<T>() { List<string> list = new List<string>(); using (BucketListRequest request = new BucketListRequest(null)) using (BucketListResponse response = service.BucketList(request)) { XmlDocument bucketXml = response.StreamResponseToXmlDocument(); XmlNodeList buckets = bucketXml.SelectNodes("//*[local-name()='Name']"); foreach (XmlNode bucket in buckets) { list.Add(bucket.InnerXml); } } return list.Cast<T>().AsQueryable(); } WITH LITS3 LIBRARY: public IQueryable<T> FindAllBuckets<T>() { return _repository.GetAllBuckets() .Cast<T>() .AsQueryable(); }


  • Flash Buttons Don't Work: TypeError: Error #1009: Cannot access a property or method of a null object reference

    - by goldenfeelings
    I've read through several threads about this error, but haven't been able to apply it to figure out my situation... My flash file is an approx 5 second animation. Then, the last keyframe of each layer (frame #133) has a button in it. My flash file should stop on this last key frame, and you should be able to click on any of the 6 buttons to navigate to another html page in my website. Here is the Action Script that I have applied to the frame in which the buttons exist (on a separate layer, see screenshot at: http://www.footprintsfamilyphoto.com/wp-content/themes/Footprints/images/flash_buttonissue.jpg stop (); function babieschildren(event:MouseEvent):void { trace("babies children method was called!!!"); var targetURL:URLRequest = new URLRequest("http://www.footprintsfamilyphoto.com/portfolio/babies-children"); navigateToURL(targetURL, "_self"); } bc_btn1.addEventListener(MouseEvent.CLICK, babieschildren); bc_btn2.addEventListener(MouseEvent.CLICK, babieschildren); function fams(event:MouseEvent):void { trace("families method was called!!!"); var targetURL:URLRequest = new URLRequest("http://www.footprintsfamilyphoto.com/portfolio/families"); navigateToURL(targetURL, "_self"); } f_btn1.addEventListener(MouseEvent.CLICK, fams); f_btn2.addEventListener(MouseEvent.CLICK, fams); function couplesweddings(event:MouseEvent):void { trace("couples weddings method was called!!!"); var targetURL:URLRequest = new URLRequest("http://www.footprintsfamilyphoto.com/portfolio/couples-weddings"); navigateToURL(targetURL, "_self"); } cw_btn1.addEventListener(MouseEvent.CLICK, couplesweddings); cw_btn2.addEventListener(MouseEvent.CLICK, couplesweddings); When I test the movie, I get this error in the output box: "TypeError: Error #1009: Cannot access a property or method of a null object reference." The test movie does stop on the appropriate frame, but the buttons don't do anything (no URL is opened, and the trace statements don't show up in the output box when the buttons are clicked on the test movie). You can view the .swf file here: www.footprintsfamilyphoto.com/portfolio I'm confident that all 6 buttons do exist in the appropriate frame (frame 133), so I don't think that's what's causing the 1009 error. I also tried deleting each of the three function/addEventListener sections one at a time and testing, and I still got the 1009 error every time. If I delete ALL of the action script except for the "stop ()" line, then I do NOT get the 1009 error. Any ideas?? I'm very new to Flash, so if I haven't clarified something that I need to, let me know!


  • javascript simple object creation test: opera leaks?

    - by joe
    Hi, I am trying to figure out certain memory leak conditions in javascript on a few browsers. Currently I'm only testing FF 3.6, Opera 10.10, and Safari 4.0.3. I've started with a fairly simple test, and can confirm no memory leaks in Firefox and Safari. But Opera just takes memory and never gives it back. What gives? Here's the test: <html> <head> <script type="text/javascript"> window.onload = init; //window.onunload = cleanup; var a=[]; function init() { var d = document.createElement('div'); d.innerHTML = "page loading..."; document.body.appendChild(d); for (var i=0; i<400000; i++) { a[i] = new Obj("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"); } d.innerHTML = "PAGE LOADED"; } function cleanup() { for (var i=0; i<400000; i++) { a[i] = null; } } function Obj(msg) { this.msg=msg; } </script> </head> <body> </body> </html> I shouldn't need the cleanup() call on window.unload, but tried that also. No luck. As you can see this is simple JS, no circular DOM links, no closures. I monitor the memory usage using 'top' on Mac 10.4.11. Memory usage spikes up on page load, as expected. In FF and Safari reloading the page does not use any further memory, and all memory is returned when the window (tab) is closed. In Opera, memory spikes on load, and seems to also spike further on each reload (but not always...). But regardless of reload, memory never goes back down below the initial load spike. I had hoped this was a no-brainer test that all browsers would pass, so I could move on to more "interesting" conditions. Am I doing something wrong here? Or is this a known Opera issue? Thanks! -joe


  • Watching setTimeout loops so that only one is running at a time.

    - by DA
    I'm creating a content rotator in jQuery. 5 items total. Item 1 fades in, pauses 10 seconds, fades out, then item 2 fades in. Repeat. Simple enough. Using setTimeout I can call a set of functions that create a loop and will repeat the process indefinitely. I now want to add the ability to interrupt this rotator at any time by clicking on a navigation element to jump directly to one of the content items. I originally started going down the path of pinging a variable constantly (say every half second) that would check to see if a navigation element was clicked and, if so, abandon the loop, then restart the loop based on the item that was clicked. The challenge I ran into was how to actually ping a variable via a timer. The solution is to dive into JavaScript closures...which are a little over my head but definitely something I need to delve into more. However, in the process of that, I came up with an alternative option that actually seems to be better performance-wise (theoretically, at least). I have a sample running here: http://jsbin.com/uxupi/14 (It's using console.log so have fireBug running) Sample script: $(document).ready(function(){ var loopCount = 0; $('p#hello').click(function(){ loopCount++; doThatThing(loopCount); }) function doThatOtherThing(currentLoopCount) { console.log('doThatOtherThing-'+currentLoopCount); if(currentLoopCount==loopCount){ setTimeout(function(){doThatThing(currentLoopCount)},5000) } } function doThatThing(currentLoopCount) { console.log('doThatThing-'+currentLoopCount); if(currentLoopCount==loopCount){ setTimeout(function(){doThatOtherThing(currentLoopCount)},5000); } } }) The logic being that every click of the trigger element will kick off the loop passing into itself a variable equal to the current value of the global variable. That variable gets passed back and forth between the functions in the loop. Each click of the trigger also increments the global variable so that subsequent calls of the loop have a unique local variable. Then, within the loop, before the next step of each loop is called, it checks to see if the variable it has still matches the global variable. If not, it knows that a new loop has already been activated so it just ends the existing loop. Thoughts on this? Valid solution? Better options? Caveats? Dangers? UPDATE: I'm using John's suggestion below via the clearTimeout option. However, I can't quite get it to work. The logic is as such: var slideNumber = 0; var timeout = null; function startLoop(slideNumber) { ...do stuff here to set up the slide based on slideNumber... slideFadeIn() } function continueCheck(){ if (timeout != null) { // cancel the scheduled task. clearTimeout(timeout); timeout = null; return false; }else{ return true; } }; function slideFadeIn() { if (continueCheck){ // a new loop hasn't been called yet so proceed... // fade in the LI $currentListItem.fadeIn(fade, function() { if(multipleFeatures){ timeout = setTimeout(slideFadeOut,display); } }); }; function slideFadeOut() { if (continueLoop){ // a new loop hasn't been called yet so proceed... slideNumber=slideNumber+1; if(slideNumber==features.length) { slideNumber = 0; }; timeout = setTimeout(function(){startLoop(slideNumber)},100); }; startLoop(slideNumber); The above kicks of the looping. I then have navigation items that, when clicked, I want the above loop to stop, then restart with a new beginning slide: $(myNav).click(function(){ clearTimeout(timeout); timeout = null; startLoop(thisItem); }) If I comment out 'startLoop...' 
from the click event, it, indeed, stops the initial loop. However, if I leave that last line in, it doesn't actually stop the initial loop. Why? What happens is that both loops seem to run in parallel for a period. So, when I click my navigation, clearTimeout is called, which clears it.


  • Getting <div>s to align next to each other

    - by user1322845
    I have the following code I am trying to get working correctly: <div id="newspost_bg"> <article> <p> <header><h3>The fast red fox!</h3></header> This is where the article resides in the article tag. This is good for SEO optimization. <footer>Read More..</footer> </p> </article> </div> <div id="newspost_bg"> hello </div> <div id="newspost_bg"> hello </div> <div id="advertisement"> <script type="text/javascript"><!-- google_ad_client = "ca-pub-2139701283631933"; /* testing site */ google_ad_slot = "4831288817"; google_ad_width = 120; google_ad_height = 600; //--> </script> </div> Here is the css that goes with it: #newspost_bg{ position: relative; background-color: #d9dde1; width:700px; height:250px; margin: 10px; margin-left: 20px; border: solid 10px #1d2631; float:left; } #newspost_bg article{ position: relative; margin-left: 20px; } #advertisement{ float: left; background-color: #d9dde1; width: 125px; height: 605px; margin: 10px; } The problem I'm experiencing is that the advertisements im trying to get setup will align with the last with the id of newspost_bg but im looking to havce it align to the top of the container it is in. I dont know if this is enough info, if not please let me know what you might need. Im new to the web coding scene so any and all critiques help me.


  • How to enable UAC prompt through programming

    - by peter
    I want to implement a UAC prompt for an application in visualc++ the operating system is 32bit x7460(2processor) Windowsserver 2008 the exe is myproject.exe through manifest.. Here for testing i wl build the application in Windows XP OS and copy the exe in to system containg the Windowsserver 2008 machine and replace it So what i did is i added a manifest like this name of that is myproject.exe.manifest My project has 3 folders like Headerfile,Resourcefile and Source file.I added this manifest(myproject.exe.manifest) in the Sourcefile folder containing other cpp and c code <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <assemblyIdentity version="11.1.4.0" processorArchitecture="X7460" name="myproject" type="win32"/> <description>myproject Problem</description> <!-- Identify the application security requirements. --> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2"> <security> <requestedPrivileges> <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/> </requestedPrivileges> </security> </trustInfo> </assembly> then i added this line of code in Resourcefile(.rc).Means one header file is there(Myproject.h).I added the line of code there #define MANIFEST_RESOURCE_ID 1 MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest" Finally i did the following step 1. Open your project in Microsoft Visual Studio 2005. 2. Under Project, select Properties. 3. In Properties, select Manifest Tool, and then select Input and Output. 4. Add in the name of your application manifest file under Additional manifest files. 5. Rebuild your application. But i am getting lot of Syntax errors Is there any problems in the way which i followed.If i commented the line define MANIFEST_RESOURCE_ID 1 MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest" which added in Myproject.h for adding values in .rc file there willnot any error other than this general error c1010070: Failed to load and parse the manifest. The system cannot find the file specified. .\myproject.exe.manifest How to enable UAC prompt through programming


  • How to successfully implement og:image for LinkedIn

    - by Sabo
    THE PROBLEM: I am trying, without much success, to implement open graph image on site: http://www.guarenty-group.com/cz/ The homepage is completeply bypassing the og:image tag, where internal pages are reading all images from the site and place og:image as the last option. Other social networks are working fine on both internal pages and homepage. THE CONFIGURATION: I have no share buttons or alike, all I want is to be able to share the link via my profile. The image is well over 300x300px: http://guarenty-group.com/img/gg_seal.png Here is how my head tag looks like: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Guarenty Group : Pojištení pro nájemce a pronajímatelé</title> <meta name="keywords" content="" /> <meta name="description" content="Guarenty Group pojištuje príjem z nájmu pronajímatelum, kauci nájemcum - aby nemuseli platit velkou cástku v hotovostí predem - a dále nájemcum pojištuje príjmy, aby meli na nájem pri nemoci, úrazu ci nezamestnání." /> <meta name="image_src" content="http://guarenty-group.com/img/gg_seal.png" /> <meta name="image_url" content="http://guarenty-group.com/img/gg_seal.png" /> <meta property="og:title" content="Pojištení pro nájemce a pronajímatelé" /> <meta property="og:url" content="http://guarenty-group.com/cz/" /> <meta property="og:image" content="http://guarenty-group.com/img/gg_seal.png" /> <meta property="og:description" content="Guarenty Group pojištuje príjem z nájmu pronajímatelum, kauci nájemcum - aby nemuseli platit velkou cástku v hotovostí predem - a dále nájemcum pojištuje príjmy, aby meli na nájem pri nemoci, úrazu ci nezamestnání [...]" /> ... </head> THE TESTING RESULTS: In order to trick the cache i have tested the site with http://www.guarenty-group.com/cz/?try=N, where I have changed the N every time. The strange thing is that images found for different value of N is different. Sometimes there is no image, sometimes there is 1, 2 or 3 images, but each time there is a different set of images. But, in any case I could not find the image specified in the og:graph! MY QUESTIONS: https://developer.linkedin.com/documents/setting-display-tags-shares is saying one thing, and the personnel on the support forum is saying "over 300" Does anyone know What is the official minimum dimension of the image (both w and h)? Can an image be too large? Should I use the xmlns, should I not use xmlns or it doesn't matter? What are the maximum (and minimum) lengths for og:title and og:description tags? Any other suggestion is of course welcomed :) Thanks in advance, cheers~

