Search Results

Search found 13653 results on 547 pages for 'integration testing'.

  • How to delete multiple files with msbuild/web deployment project?

    - by Alex
    I have an odd issue with how msbuild behaves with a VS2008 Web Deployment Project and would like to know why it seems to randomly misbehave. I need to remove a number of files from a deployment folder that should only exist in my development environment. The files have been generated by the web application during dev/testing and are not included in my Visual Studio project/solution. The configuration I am using is as follows:

        <!-- Partial extract from Microsoft Visual Studio 2008 Web Deployment Project -->
        <ItemGroup>
          <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" /> <!-- Folder 1: 36 files -->
          <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />    <!-- Folder 2: 2 files -->
          <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />       <!-- Folder 3: 1 file -->
        </ItemGroup>
        <Target Name="AfterBuild">
          <Message Text="------ AfterBuild process starting ------" Importance="high" />
          <Delete Files="@(DeleteAfterBuild)">
            <Output TaskParameter="DeletedFiles" PropertyName="deleted" />
          </Delete>
          <Message Text="DELETED FILES: $(deleted)" Importance="high" />
          <Message Text="------ AfterBuild process complete ------" Importance="high" />
        </Target>

    The problem I have is that when I do a build/rebuild of the Web Deployment Project it "sometimes" removes all the files, but other times it will not remove anything! Or it will remove only one or two of the three folders in the DeleteAfterBuild item group. There seems to be no consistency in when the build process decides to remove the files. When I edited the configuration to include only Folder 1 (for example), it removed all the files correctly. Then, after adding Folders 2 and 3, it started removing all the files as I want. Then, seemingly at random, I'll rebuild the project and it won't remove any of the files! I have tried moving these items to the ExcludeFromBuild item group (which is probably where they should be), but it gives me the same unpredictable result. Has anyone experienced this? Am I doing something wrong? Why does this happen?
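    One likely cause worth checking: a top-level <ItemGroup> is evaluated when MSBuild loads the project, so the wildcards can be expanded before the build has produced or copied the files they are meant to match - which would look exactly like random behavior. Declaring the items inside the target defers expansion until the target actually runs. A minimal sketch of that change (requires MSBuild 3.5 or later; on older toolsets the CreateItem task plays the same role):

        <Target Name="AfterBuild">
          <!-- Evaluate the wildcards at execution time, not at project load -->
          <ItemGroup>
            <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" />
            <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />
            <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />
          </ItemGroup>
          <Delete Files="@(DeleteAfterBuild)">
            <Output TaskParameter="DeletedFiles" ItemName="FilesDeleted" />
          </Delete>
          <Message Text="DELETED FILES: @(FilesDeleted)" Importance="high" />
        </Target>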

  • Segmenting a double array of labels

    - by Ami
    The Problem: I have a large double array populated with various labels. Each element (cell) in the double array contains a set of labels, and some elements in the double array may be empty. I need an algorithm to cluster elements in the double array into discrete segments. A segment is defined as a set of pixels that are adjacent within the double array plus one label that all those pixels in the segment have in common. (Diagonal adjacency doesn't count, and I'm not clustering empty cells.)

        |-------|-------|------|
        | Jane  | Joe   |      |
        | Jack  | Jane  |      |
        |-------|-------|------|
        | Jane  | Jane  |      |
        |       | Joe   |      |
        |-------|-------|------|
        |       | Jack  | Jane |
        |       | Joe   |      |
        |-------|-------|------|

    In the above arrangement of labels distributed over nine elements, the largest cluster is the "Jane" cluster occupying the four upper-left cells.

    What I've Considered: I've considered iterating through every label of every cell in the double array and testing to see if the cell-label combination under inspection can be associated with a preexisting segment. If the element under inspection cannot be associated with a preexisting segment, it becomes the first member of a new segment. If the label/cell combination can be associated with a preexisting segment, it associates. Of course, to make this method reasonable I'd have to implement an elaborate hashing system. I'd have to keep track of all the cell-label combinations that stand adjacent to preexisting segments and are in the path of the incrementing indices that are iterating through the double array. This hash method would avoid having to iterate through every pixel in every preexisting segment to find an adjacency.

    Why I Don't Like It: As is, the above algorithm doesn't take into consideration the case where an element in the double array can be associated with two unique segments, one in the horizontal direction and one in the vertical direction. To handle these cases properly, I would need to implement a test for this specific case and then a method that both associates the element under inspection with a segment and concatenates the two adjacent identical segments. On the whole, this method and the intricate hashing system it would require feel very inelegant. Additionally, I really only care about finding the large segments in the double array, and I'm much more concerned with the speed of this algorithm than with the accuracy of the segmentation, so I'm looking for a better way. I assume there is some stochastic method for doing this that I haven't thought of. Any suggestions?
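    For comparison, a deterministic pass may already be fast enough: treat each (cell, label) pair as a node and run an ordinary 4-connected flood fill, which is linear in the number of pairs and needs no incremental merging or adjacency hashing. A hedged C# sketch, where w, h and labels[x, y] (the set of labels in cell (x, y)) are my names, not the poster's:

        // Sketch only: assumes a w-by-h grid; labels[x, y] yields the labels in that cell.
        var visited = new HashSet<(int x, int y, string label)>();
        var segments = new List<List<(int x, int y)>>();

        for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        foreach (string label in labels[x, y])
        {
            if (!visited.Add((x, y, label))) continue;   // already part of a segment

            var segment = new List<(int x, int y)>();
            var stack = new Stack<(int x, int y)>();
            stack.Push((x, y));
            while (stack.Count > 0)
            {
                var (cx, cy) = stack.Pop();
                segment.Add((cx, cy));
                // 4-connected neighbours that also carry this label
                foreach (var (nx, ny) in new[] { (cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1) })
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h
                        && labels[nx, ny].Contains(label)
                        && visited.Add((nx, ny, label)))
                        stack.Push((nx, ny));
            }
            segments.Add(segment);   // sort by Count afterwards to pick out the large ones
        }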

  • Deploy ASP.NET MVC 2 Application to Windows 2008 R2

    - by user325320
    Hi, I have an ASP.NET MVC 2 web site, which I can visit at http://localhost/Admin/ContentMgr/ under the ASP.NET Development Server from Visual Studio 2010 (RTM). When I try to deploy the site to Windows 2008 R2 / IIS 7.5, the URL always returns 404. First, my application pool is running on .NET 4.0, in Integrated mode. Second, my IIS does have the "HTTP Errors" and "HTTP Redirection" features turned on. And this is my web.config:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.web>
            <compilation debug="true" defaultLanguage="c#" targetFramework="4.0">
              <assemblies>
                <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
                <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
                <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
              </assemblies>
            </compilation>
            <!--
            <authentication mode="Forms">
              <forms loginUrl="~/Account/LogOn" timeout="2880" />
            </authentication>
            -->
            <pages>
              <namespaces>
                <add namespace="System.Web.Mvc" />
                <add namespace="System.Web.Mvc.Ajax" />
                <add namespace="System.Web.Mvc.Html" />
                <add namespace="System.Web.Routing" />
              </namespaces>
            </pages>
          </system.web>
          <system.webServer>
            <validation validateIntegratedModeConfiguration="false" />
            <modules runAllManagedModulesForAllRequests="true">
              <remove name="UrlRoutingModule" />
              <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            </modules>
            <handlers>
              <remove name="MvcHttpHandler" />
              <add name="MvcHttpHandler" preCondition="integratedMode" verb="*" path="*.mvc" type="System.Web.Mvc.MvcHttpHandler" />
              <add name="UrlRoutingHandler" preCondition="integratedMode" verb="*" path="UrlRouting.axd" type="System.Web.HttpForbiddenHandler, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
            </handlers>
            <httpErrors errorMode="Detailed" />
          </system.webServer>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
                <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>

  • Android - Start service on boot

    - by Gady
    From everything I've seen on Stack Exchange and elsewhere, I have everything set up correctly to start an IntentService when Android OS boots. Unfortunately it is not starting on boot, and I'm not getting any errors. Maybe the experts can help...

    Manifest:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.phx.batterylogger"
            android:versionCode="1"
            android:versionName="1.0"
            android:installLocation="internalOnly">
            <uses-sdk android:minSdkVersion="8" />
            <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
            <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
            <uses-permission android:name="android.permission.BATTERY_STATS" />
            <application android:icon="@drawable/icon" android:label="@string/app_name">
                <service android:name=".BatteryLogger" />
                <receiver android:name=".StartupIntentReceiver">
                    <intent-filter>
                        <action android:name="android.intent.action.BOOT_COMPLETED" />
                    </intent-filter>
                </receiver>
            </application>
        </manifest>

    BroadcastReceiver for startup:

        package com.phx.batterylogger;

        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;

        public class StartupIntentReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                Intent serviceIntent = new Intent(context, BatteryLogger.class);
                context.startService(serviceIntent);
            }
        }

    UPDATE: I tried just about all of the suggestions below, and I added logging such as Log.v("BatteryLogger", "Got to onReceive, about to start service") to the onReceive handler of the StartupIntentReceiver, and nothing is ever logged. So it isn't even making it to the BroadcastReceiver. I think I'm deploying the APK and testing correctly: I just run Debug in Eclipse, and the console says it successfully installs it to my Xoom tablet at \BatteryLogger\bin\BatteryLogger.apk. Then, to test, I reboot the tablet, look at the logs in DDMS, and check the running services in the OS settings. Does this all sound correct, or am I missing something? Again, any help is much appreciated.

  • Delphi LoadLibrary failing to find DLL in another directory - any good options?

    - by Chris Thornton
    Two Delphi programs need to load foo.dll, which contains some code that injects a client-auth certificate into a SOAP request. foo.dll resides in c:\fooapp\foo.dll and is normally loaded by c:\fooapp\foo.exe. That works fine. The other program needs the same functionality, but it resides in c:\program files\unwantedstepchild\sadapp.exe. Both apps load the DLL with this code:

        FOOLib := LoadLibrary('foo.dll');
        ...
        if FOOLib <> 0 then
        begin
          FOOProc := GetProcAddress(FOOLib, 'xInjectCert');
          FOOProc(myHttpRequest, Data, CertName);
        end;

    It works great for foo.exe, as the DLL is right there. sadapp.exe fails to load the library, so FOOLib is 0 and the rest never gets called. The sadapp.exe program therefore silently fails to inject the cert, and when we test against production, the cert is missing, so the connection fails.

    Obviously, we should have fully qualified the path to the DLL. Without going into a lot of detail, there were aspects of the testing that masked this problem until recently, and now it's basically too late to fix in code, as that would require a full regression test, and there isn't time for that. Since we've painted ourselves into a corner, I need to know if there are any options that I've overlooked. While we can't change the code (for this release), we CAN tweak the installer.

    I've found that placing c:\fooapp into the path works. So does adding a second copy of foo.dll directly into c:\program files\unwantedstepchild. c:\fooapp\foo.exe will always be running while sadapp.exe is running, so I was hoping that Windows would find it that way, but apparently not. Is there a way to tell Windows that I really want that same DLL? Maybe a manifest or something? This is the sort of "magic bullet" that I'm looking for. I know I can:

      - Modify the Windows path, probably in the installer. That's ugly.
      - Add a second copy of the DLL, directly into the unwantedstepchild folder. Also ugly.
      - Delay the project while we code and test a proper fix. Unacceptable.
      - Other?

    Thanks for any guidance, especially with "Other". I understand that this issue is not necessarily specific to Delphi. Thanks!

  • c#: Clean way to fit a collection into a multidimensional array?

    - by Rosarch
    I have an ICollection<MapNode>. Each MapNode has a Position attribute, which is a Point. I want to sort these points first by Y value, then by X value, and put them in a multidimensional array (MapNode[,]). The collection would look something like this:

        (30, 20) (20, 20) (20, 30) (30, 10) (30, 30) (20, 10)

    And the final product:

        (20, 10) (20, 20) (20, 30)
        (30, 10) (30, 20) (30, 30)

    Here is the code I have come up with to do it. Is this hideously unreadable? I feel like it's more hacky than it needs to be.

        private MapNode[,] createWorldPathNodes()
        {
            ICollection<MapNode> points = new HashSet<MapNode>();
            Rectangle worldBounds = WorldQueryUtils.WorldBounds();
            for (float x = worldBounds.Left; x < worldBounds.Right; x += PATH_NODE_CHUNK_SIZE)
            {
                for (float y = worldBounds.Y; y > worldBounds.Height; y -= PATH_NODE_CHUNK_SIZE)
                {
                    // default is that everywhere is navigable;
                    // a different function is responsible for determining the real value
                    points.Add(new MapNode(true, new Point((int)x, (int)y)));
                }
            }

            int distinctXValues = points.Select(node => node.Position.X).Distinct().Count();
            int distinctYValues = points.Select(node => node.Position.Y).Distinct().Count();

            IList<MapNode[]> mapNodeRowsToAdd = new List<MapNode[]>();
            while (points.Count > 0) // every iteration will take a row out of points
            {
                // get all the nodes with the greatest Y value currently in the collection
                int currentMaxY = points.Select(node => node.Position.Y).Max();
                ICollection<MapNode> ythRow = points.Where(node => node.Position.Y == currentMaxY).ToList();

                // remove these nodes from the pool we're picking from
                points = points.Where(node => !ythRow.Contains(node)).ToList(); // ToList() is just so it is still a collection

                // put the nodes with max y value in the array, sorting by X value
                mapNodeRowsToAdd.Add(ythRow.OrderByDescending(node => node.Position.X).ToArray());
            }

            MapNode[,] mapNodes = new MapNode[distinctXValues, distinctYValues];
            int xValuesAdded = 0;
            int yValuesAdded = 0;
            foreach (MapNode[] mapNodeRow in mapNodeRowsToAdd)
            {
                xValuesAdded = 0;
                foreach (MapNode node in mapNodeRow)
                {
                    // [y, x] may seem backwards, but mapNodes[y] == the yth row
                    mapNodes[yValuesAdded, xValuesAdded] = node;
                    xValuesAdded++;
                }
                yValuesAdded++;
            }
            return mapNodes;
        }

    The above function seems to work pretty well, but it hasn't been subjected to bulletproof testing yet.
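    Not the poster's code, just a hedged sketch of a shorter route: GroupBy/OrderBy can replace the destructive while-loop, with the same descending ordering the loop produces. It assumes every row has the same number of distinct X values, which the nested loops above guarantee:

        // Group by Y, order the rows, order each row by X, then copy into the array.
        var rows = points
            .GroupBy(n => n.Position.Y)
            .OrderByDescending(g => g.Key)   // same row order as the original loop
            .Select(g => g.OrderByDescending(n => n.Position.X).ToArray())
            .ToArray();

        var mapNodes = new MapNode[rows.Length, rows[0].Length];
        for (int y = 0; y < rows.Length; y++)
            for (int x = 0; x < rows[y].Length; x++)
                mapNodes[y, x] = rows[y][x];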

  • Is this an idiomatic way to pass mocks into objects?

    - by Billy ONeal
    I'm a bit confused about passing this mock class into an implementation class. It feels wrong to have all this explicitly managed memory flying around. I'd just pass the class by value, but that runs into the slicing problem. Am I missing something here?

    Implementation:

        namespace detail
        {
            struct FileApi
            {
                virtual HANDLE CreateFileW(
                    __in     LPCWSTR lpFileName,
                    __in     DWORD dwDesiredAccess,
                    __in     DWORD dwShareMode,
                    __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes,
                    __in     DWORD dwCreationDisposition,
                    __in     DWORD dwFlagsAndAttributes,
                    __in_opt HANDLE hTemplateFile
                )
                {
                    return ::CreateFileW(lpFileName, dwDesiredAccess, dwShareMode,
                        lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes,
                        hTemplateFile);
                }
                virtual void CloseHandle(HANDLE handleToClose)
                {
                    ::CloseHandle(handleToClose);
                }
            };
        }

        class File : boost::noncopyable
        {
            HANDLE hWin32;
            boost::scoped_ptr<detail::FileApi> fileApi;
        public:
            File(
                __in     LPCWSTR lpFileName,
                __in     DWORD dwDesiredAccess,
                __in     DWORD dwShareMode,
                __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes,
                __in     DWORD dwCreationDisposition,
                __in     DWORD dwFlagsAndAttributes,
                __in_opt HANDLE hTemplateFile,
                __in     detail::FileApi * method = new detail::FileApi()
            )
            {
                fileApi.reset(method);
                hWin32 = fileApi->CreateFileW(lpFileName, dwDesiredAccess, dwShareMode,
                    lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes,
                    hTemplateFile);
            }
            ~File()
            {
                fileApi->CloseHandle(hWin32);
            }
        };

    Tests:

        namespace detail
        {
            struct MockFileApi : public FileApi
            {
                MOCK_METHOD7(CreateFileW, HANDLE(LPCWSTR, DWORD, DWORD, LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE));
                MOCK_METHOD1(CloseHandle, void(HANDLE));
            };
        }

        using namespace detail;
        using namespace testing;

        TEST(Test_File, OpenPassesArguments)
        {
            MockFileApi * api = new MockFileApi;
            EXPECT_CALL(*api, CreateFileW(Eq(L"BozoFile"), Eq(56), Eq(72),
                Eq(reinterpret_cast<LPSECURITY_ATTRIBUTES>(67)), Eq(98), Eq(102),
                Eq(reinterpret_cast<HANDLE>(98))))
                .Times(1).WillOnce(Return(reinterpret_cast<HANDLE>(42)));
            File test(L"BozoFile", 56, 72, reinterpret_cast<LPSECURITY_ATTRIBUTES>(67),
                98, 102, reinterpret_cast<HANDLE>(98), api);
        }
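    One common alternative, sketched here under the assumption that FileApi stays stateless: let File borrow the API object by reference and share one default instance, so no naked new/delete crosses the interface and the test's mock can live on the stack. DefaultFileApi is an illustrative name, not from the original code:

        namespace detail
        {
            inline FileApi& DefaultFileApi()
            {
                static FileApi instance;   // stateless, so one shared instance is safe
                return instance;
            }
        }

        class File : boost::noncopyable
        {
            HANDLE hWin32;
            detail::FileApi& fileApi;      // borrowed, never owned
        public:
            File(LPCWSTR lpFileName, DWORD dwDesiredAccess, DWORD dwShareMode,
                 LPSECURITY_ATTRIBUTES lpSecurityAttributes, DWORD dwCreationDisposition,
                 DWORD dwFlagsAndAttributes, HANDLE hTemplateFile,
                 detail::FileApi& api = detail::DefaultFileApi())
                : fileApi(api)
            {
                hWin32 = fileApi.CreateFileW(lpFileName, dwDesiredAccess, dwShareMode,
                    lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes,
                    hTemplateFile);
            }
            ~File() { fileApi.CloseHandle(hWin32); }
        };

        // In the test, the mock simply has to outlive the File:
        //   detail::MockFileApi api;
        //   EXPECT_CALL(api, CreateFileW(...)).WillOnce(Return(someHandle));
        //   File test(L"BozoFile", 56, 72, ..., api);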

  • How best to store Subversion version information in EARs?

    - by Rene
    When receiving a bug report or an it-doesn't-work message, one of my initial questions is always: what version? With different builds at many stages of testing, planning and deploying, this is often a non-trivial question. In the case of releasing Java archive (ear, jar, rar, war) files, I would like to be able to look in/at the JAR and switch to the same branch, version or tag that was the source of the released JAR. How can I best adjust the ant build process so that the version information in the svn checkout remains in the created build? I was thinking along the lines of:

      - adding a VERSION file, but with what content?
      - storing information in the META-INF file, but under what property and with which content?
      - copying sources into the result archive
      - adding svn:keywords properties to all sources, with keywords in places the compiler leaves them be

    I ended up using the svnversion approach (the accepted answer), because it scans the entire subtree, as opposed to svn info, which just looks at the current file/directory. For this I defined the SVN task in the ant file to make it more portable:

        <taskdef name="svn" classname="org.tigris.subversion.svnant.SvnTask">
          <classpath>
            <pathelement location="${dir.lib}/ant/svnant.jar"/>
            <pathelement location="${dir.lib}/ant/svnClientAdapter.jar"/>
            <pathelement location="${dir.lib}/ant/svnkit.jar"/>
            <pathelement location="${dir.lib}/ant/svnjavahl.jar"/>
          </classpath>
        </taskdef>

    Not all builds result in webservices. The ear file must keep the same name before deployment because of updating in the application server. Making the file executable is still an option, but until then I just include a version information file:

        <target name="version">
          <svn><wcVersion path="${dir.source}"/></svn>
          <echo file="${dir.build}/VERSION">${revision.range}</echo>
        </target>

    Refs:

      - svnrevision: http://svnbook.red-bean.com/en/1.1/re57.html
      - svn info: http://svnbook.red-bean.com/en/1.1/re13.html
      - subclipse svn task: http://subclipse.tigris.org/svnant/svn.html
      - svn client: http://svnkit.com/

  • c++ class member functions selected by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague, form of the question is: how can I use the traits pattern to instantiate non-virtual member functions?

    The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line search, and conceivably the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line search should do so. If not, it should use function evaluations only. Some methods require more accurate line searches than others. Those are just some examples.

    The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits. Yuck. It would be trivial to define the traits with #defines and the member functions with #ifdefs and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits.

    I tried doing it using boost::enable_if, etc. The specialized state info was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. Perhaps tag dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?
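    One well-worn answer, sketched with illustrative names rather than anything from the original code: policy-based design. Each orthogonal trait becomes its own template parameter, its state lives inside the policy class, and private inheritance makes the calls ordinary non-virtual member calls the compiler can inline. Point, Direction and the policy classes below all stand in for your own types:

        template <class UpdatePolicy, class LineSearchPolicy, class DonePolicy>
        class Optimizer : private UpdatePolicy,      // e.g. BfgsUpdate holds its matrix here
                          private LineSearchPolicy,  // e.g. GradientLineSearch
                          private DonePolicy
        {
        public:
            template <class Function>
            Point Minimize(Function& f, Point x)
            {
                while (!DonePolicy::Done(f, x))
                {
                    Direction d = UpdatePolicy::NextDirection(f, x);  // non-virtual, inlinable
                    x = LineSearchPolicy::Search(f, x, d);
                }
                return x;
            }
        };

        // Mix and match by typedef; each combination is its own concrete type:
        typedef Optimizer<BfgsUpdate, GradientLineSearch, GradientToleranceDone> BfgsOptimizer;
        typedef Optimizer<LmqnUpdate, ValueOnlyLineSearch, GradientToleranceDone> LmqnOptimizer;

    Because the policies are base classes rather than members, a policy that needs to see the others' state can still reach the host through CRTP, which keeps everything a member call with no virtual dispatch.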

  • [NSIS] Custom radio-button INI page via Eclipse

    - by Omegazero
    I'm using Eclipse's "create InstallOptions" menu to create a custom INI page with radio buttons for repackaging the BlackBerry Desktop installer. There are two sections, one for each type: "Internet" and "Enterprise". I need a user to select one of the two options; depending on their selection, the page should carry over the choice made on the custom page, jump to the INSTFILES page, and continue on to the end. I couldn't find any concrete documentation on getting INI pages to load in the script (I'm probably searching incorrectly) and then passing data from one page to the next (according to fields, I guess?). Any help is appreciated, even if it's to tell me I'm blind and can't read a doc (though a link would help :) ).

    Here's the INI code:

        ; Auto-generated by EclipseNSIS InstallOptions Script Wizard
        ; Jul 29, 2009 5:42:56 PM
        [Settings]
        NumFields=7
        Title=RIM BlackBerry Desktop 5.0 installation
        CancelEnabled=1

        [Field 1]
        Type=RadioButton
        Left=15
        Top=28
        Right=100
        Bottom=38
        Text=Internet
        State=
        Flags=NOTIFY

        [Field 4]
        Type=RadioButton
        Left=15
        Top=95
        Right=100
        Bottom=105
        Text=Enterprise
        Flags=NOTIFY

        [Field 2]
        Type=GroupBox
        Left=0
        Top=10
        Right=300
        Bottom=75
        Text=

        [Field 5]
        Type=Label
        Left=30
        Top=42
        Right=235
        Bottom=52
        Text=For users who are NOT on the Enterprise (Exchange) server

        [Field 6]
        Type=Label
        Left=30
        Top=111
        Right=235
        Bottom=121
        Text=Choose this only if you are on the Exchange server

        [Field 3]
        Type=GroupBox
        Left=0
        Top=75
        Right=300
        Bottom=140

        [Field 7]
        Type=Label
        Left=0
        Top=0
        Right=130
        Bottom=10
        Text=Please choose your installation method

    ...And here's the NSI code:

        ; Auto-generated by EclipseNSIS Script Wizard
        ; Jul 29, 2009 5:42:16 PM
        Name "BlackBerry Desktop"
        RequestExecutionLevel admin

        ; General Symbol Definitions
        !define VERSION 5.0.0.11
        !define COMPANY RIM
        !define URL http://www.blackberry.com

        ; MUI Symbol Definitions
        !define MUI_ICON BBD.ico
        !define MUI_LICENSEPAGE_RADIOBUTTONS

        ; Included files
        !include Sections.nsh
        !include MUI2.nsh

        ; Reserved Files
        ReserveFile "${NSISDIR}\Plugins\AdvSplash.dll"

        ; Installer pages
        !insertmacro MUI_PAGE_WELCOME
        !insertmacro MUI_PAGE_LICENSE license.txt
        !insertmacro MUI_PAGE_COMPONENTS
        !insertmacro MUI_PAGE_INSTFILES
        !insertmacro MUI_PAGE_FINISH

        ; Installer languages
        !insertmacro MUI_LANGUAGE English

        ; Installer attributes
        OutFile RIM_BlackBerry_Desktop_5.0.exe
        InstallDir "$TEMP\RIM BlackBerry Desktop 5.0 Setup Files"
        CRCCheck on
        XPStyle on
        ShowInstDetails hide
        VIProductVersion 5.0.0.11
        VIAddVersionKey /LANG=${LANG_ENGLISH} ProductName "BlackBerry Desktop"
        VIAddVersionKey /LANG=${LANG_ENGLISH} ProductVersion "${VERSION}"
        VIAddVersionKey /LANG=${LANG_ENGLISH} CompanyName "${COMPANY}"
        VIAddVersionKey /LANG=${LANG_ENGLISH} CompanyWebsite "${URL}"
        VIAddVersionKey /LANG=${LANG_ENGLISH} FileVersion "${VERSION}"
        VIAddVersionKey /LANG=${LANG_ENGLISH} FileDescription ""
        VIAddVersionKey /LANG=${LANG_ENGLISH} LegalCopyright ""

        ; Installer sections
        Section /o Main SEC0000
            SetOutPath $INSTDIR
            SetOverwrite ifdiff
            ; TESTING PHASE
        SectionEnd

        SectionGroup /e "BlackBerry Desktop Section"
            Section /o Internet SEC0001
                SetOutPath $INSTDIR\DRIVERS
                SetOverwrite ifdiff
                ; Execwait 'msiexec /i "$INSTDIR\BlackBerry USB and Modem Drivers_ENG (DM5.0b28).msi" /passive'
                SetOutPath $INSTDIR
                SetOverwrite ifdiff
                ; File /r *
                ; ExecWait '"$INSTDIR\Setup.exe" /S/v/qb!'
            SectionEnd

            Section /o Enterprise SEC0002
                SetOutPath $INSTDIR\DRIVERS
                SetOverwrite ifdiff
                ; Execwait 'msiexec /i "$INSTDIR\BlackBerry USB and Modem Drivers_ENG (DM5.0b28).msi" /passive'
                SetOutPath $INSTDIR
                SetOverwrite ifdiff
                ; File /r *
                ; Delete /REBOOTOK "$INSTDIR\Setup.ini"
                ; Rename /REBOOTOK "$INSTDIR\Setup_Enterprise.ini" "$INSTDIR\Setup.ini"
                ; ExecWait '"$INSTDIR\Setup.exe" /S/v/qb!'
            SectionEnd
        SectionGroupEnd

        ; Section Descriptions
        !insertmacro MUI_FUNCTION_DESCRIPTION_BEGIN
        !insertmacro MUI_DESCRIPTION_TEXT ${SEC0000} $(SEC0000_DESC)
        !insertmacro MUI_DESCRIPTION_TEXT ${SEC0001} $(SEC0001_DESC)
        !insertmacro MUI_FUNCTION_DESCRIPTION_END

        ; Installer Language Strings
        ; TODO Update the Language Strings with the appropriate translations.
        LangString SEC0000_DESC ${LANG_ENGLISH} "Installation for non-Exchange/Enterprise BlackBerry Users"
        LangString SEC0001_DESC ${LANG_ENGLISH} "Installation for Exchange/Enterprise BlackBerry Users"
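    For the wiring itself, the usual pattern looks something like the sketch below. It is untested against this script and assumes the INI ships as choice.ini and that LogicLib.nsh is included; the page function shows the InstallOptions dialog, the leave function reads Field 1's State (1 when the "Internet" radio button is selected), and the sections are toggled before INSTFILES runs. The Page custom line goes in among the MUI page macros, before MUI_PAGE_INSTFILES:

        ; Show the custom page between the license and instfiles pages
        Page custom ChoicePageShow ChoicePageLeave

        Function .onInit
          InitPluginsDir
          File /oname=$PLUGINSDIR\choice.ini "choice.ini"
        FunctionEnd

        Function ChoicePageShow
          InstallOptions::dialog "$PLUGINSDIR\choice.ini"
        FunctionEnd

        Function ChoicePageLeave
          ; Field 1 is the "Internet" radio button; State is 1 when selected
          ReadINIStr $0 "$PLUGINSDIR\choice.ini" "Field 1" "State"
          ${If} $0 == 1
            !insertmacro SelectSection ${SEC0001}
            !insertmacro UnselectSection ${SEC0002}
          ${Else}
            !insertmacro UnselectSection ${SEC0001}
            !insertmacro SelectSection ${SEC0002}
          ${EndIf}
        FunctionEnd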

  • Not able to show video with HTML5

    - by shin
    I am testing the HTML5 video tag. I am using http://www.kaltura.org/project/HTML5_Video_Media_JavaScript_Library and http://camendesign.co.uk/. I downloaded the Creative Commons video. When I use an external link, it plays the video. So I uploaded the video to my server, but it does not play: the browser asks if I want to save it or pick an application to play it. When I go to the external link, http://cdn.kaltura.org/apis/html5lib/kplayer-examples/media/bbb400p.ogv, it plays in the browser automatically. I also tested locally, but it does not play either. I am hoping someone can tell me why, and how to solve the problem.

    This code works:

        <figure>
          <video id="vid1" width="500" height="300" style="position:absolute"
                 poster="http://cdn.kaltura.org/apis/html5lib/kplayer-examples/media/bbb480.jpg"
                 durationHint="33" controls="true">
            <source src="http://cdn.kaltura.org/apis/html5lib/kplayer-examples/media/bbb400p.ogv" />
            <source src="http://cdn.kaltura.org/apis/html5lib/kplayer-examples/media/bbb_trailer_iphone.m4v" />
          </video>
        </figure>

    This does not:

        <figure>
          <video id="vid1" width="500" height="300" style="position:absolute"
                 poster="http://cdn.kaltura.org/apis/html5lib/kplayer-examples/media/bbb480.jpg"
                 durationHint="33" controls="true">
            <source src="http://www.mywebsite.com/media/bbb400p.ogv" />
            <source src="http://www.mywebsite.com/media/bbb_trailer_iphone.m4v" />
          </video>
        </figure>

    This does not work either:

        <figure>
          <video id="vid1" width="500" height="300" style="position:absolute"
                 poster="http://cdn.kaltura.org/apis/html5lib/kplayer-examples/media/bbb480.jpg"
                 durationHint="33" controls="true">
            <source src="http://127.0.0.1/html5videotest/media/bbb400p.ogv" />
            <source src="http://127.0.0.1/html5videotest/media/bbb_trailer_iphone.m4v" />
          </video>
        </figure>
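    The save-or-open prompt is the classic symptom of the server sending the wrong Content-Type: the working CDN serves the file as video/ogg, while a default host often serves .ogv as application/octet-stream, which browsers download instead of play. Assuming an Apache server, a couple of lines in .htaccess (or the main config) usually fix it:

        AddType video/ogg .ogv
        AddType video/mp4 .m4v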

  • Evidence-Based-Scheduling - are estimations only as accurate as the work-plan they're based on?

    - by Assaf Lavie
    I've been using FogBugz's Evidence-Based Scheduling (for the uninitiated, Joel explains) for a while now, and there's an inherent problem I can't seem to work around. The system is good at telling me the probability that a given project will be delivered by some date, given the detailed list of tasks that comprise the project. However, it does not take into account the fact that during development additional tasks always pop up. Now, there's the garbage-can approach of creating a generic task/scheduled-item for "last minute hacks" or "integration tasks", or what have you, but that clearly goes against the idea of aggregating the estimates of many small cases.

    It's often the case that during the development stage of a project you realize there's a whole area your planning didn't cover, because, well, that's the nature of developing stuff that hasn't been developed before. So now your ~3-month project may very well turn into a 6-month project, but not because your estimations were off (you could be the best estimator in the world, for those tasks that comprised your initial work plan); rather, because you ended up adding a whole bunch of new tasks that weren't there to begin with. EBS doesn't help you with that. It could, theoretically (I guess): it could measure the amount of work you add to a project over time and take that into consideration when estimating the time remaining on a given project. Just a thought.

    In other words, EBS works on a task basis, but not on a project/release basis - and the latter is what's important. It's what your boss typically cares about: the delivery date, not the time it takes to finish each task along the way, and not the time it would have taken if your planning had been perfect. So the question is (yes, there's a question here, don't close it): What's your methodology when it comes to using EBS in FogBugz, and how do you solve the problem above, which seems to be a main cause of schedule delays and mispredictions?

    Edit: Some more thoughts after reading a few answers. If it comes down to having to choose which delivery date you're comfortable presenting to your higher-ups by squinting at the delivery-probability graph and choosing 80%, or 95%, or 60% (based on what, exactly?), then we've resorted to plain old buffering/factoring of our estimates. In which case, couldn't we have skipped the meticulous case-by-case hour-sized estimation effort? By forcing ourselves to break down tasks that take more than a day into smaller chunks of work, haven't we just deluded ourselves into thinking our planning is as tight and thorough as it could be?

    People may be consistently bad estimators who do not even learn from their past mistakes. In that respect, having an EBS system is certainly better than not having one. But what can we do about the fact that we're not that good at planning, either? I'm not sure it's a problem that can be solved by a similar system. Our estimates are wrong because of tendencies to be overly optimistic/pessimistic about certain tasks, and because of neglecting to account for systematic delays (e.g. sick days, a major bug crisis) - and usually not because we lack knowledge about the work that needs to be done. Our planning, on the other hand, is often incomplete because we simply don't have enough knowledge at this early stage, and I don't see how an EBS-like system could fill that gap. So we're back to methodology. We need to find a way to accommodate bad or incomplete work plans that's better than voodoo multiplication.

  • DriverManager always returns my custom driver regardless of the connection URL

    - by JGB146
    I am writing a driver to act as a wrapper around two separate MySQL connections (to distributed databases). Basically, the goal is to have all applications interact with my driver instead of requiring the application to sort out which database holds the desired data. Most of the code for this is in place, but I'm having a problem: when I attempt to create connections via the MySQL Driver, the DriverManager returns an instance of my driver instead of the MySQL Driver. I'd appreciate any tips on what could be causing this and what could be done to fix it!

    Below are a few relevant snippets of code. I can provide more, but there's a lot, so I'd need to know what else you want to see. First, from MyDriver.java:

        public MyDriver() throws SQLException {
            DriverManager.registerDriver(this);
        }

        public Connection connect(String url, Properties info) throws SQLException {
            try {
                return new MyConnection(info);
            } catch (Exception e) {
                return null;
            }
        }

        public boolean acceptsURL(String url) throws SQLException {
            if (url.contains("jdbc:jgb://")) {
                return true;
            }
            return false;
        }

    It is my understanding that this acceptsURL function dictates whether or not the DriverManager deems my driver a suitable fit for a given URL. Hence it should only pass out connections from my driver if the URL contains "jdbc:jgb://", right?

    Here's code from MyConnection.java:

        Connection c1 = null;
        Connection c2 = null;

        /**
         * Constructors
         */
        public DDBSConnection(Properties info) throws SQLException, Exception {
            info.list(System.out); // included for testing
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            String url1 = "jdbc:mysql://server1.com/jgb";
            String url2 = "jdbc:mysql://server2.com/jgb";
            this.c1 = DriverManager.getConnection(
                url1, info.getProperty("username"), info.getProperty("password"));
            this.c2 = DriverManager.getConnection(
                url2, info.getProperty("username"), info.getProperty("password"));
        }

    And this tells me two things. First, the info.list() call confirms that the correct user and password are being sent. Second, because we enter an infinite loop, we see that the DriverManager is providing new instances of my connection as matches for the MySQL URLs instead of the desired MySQL driver/connection. FWIW, I have separately tested implementations that go straight to the MySQL driver using this exact syntax (albeit only one at a time), and was able to successfully interact with each database individually from a test application outside of my driver.
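    A detail worth checking, because it matches the symptom exactly: DriverManager.getConnection does not consult acceptsURL at all - it simply calls connect on every registered driver in turn, and the first non-null Connection wins. The connect above returns a MyConnection for any URL, so it also captures the jdbc:mysql:// requests made inside the constructor, recursively. The JDBC contract is for connect to return null for URLs the driver doesn't own; a sketch of the guarded version:

        public boolean acceptsURL(String url) throws SQLException {
            return url != null && url.startsWith("jdbc:jgb://");
        }

        public Connection connect(String url, Properties info) throws SQLException {
            if (!acceptsURL(url)) {
                return null; // not ours: DriverManager moves on to the next driver
            }
            return new MyConnection(info);
        }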

  • How to view ASMX SOAP using Fiddler2?

    - by outer join
    Does anyone know if Fiddler can display the raw SOAP messages for ASMX web services? I'm testing a simple web service using both Fiddler2 and Storm, and the results vary (Fiddler shows plain XML while Storm shows the SOAP messages). See the sample requests/responses below.

    Fiddler2 request:

        POST /webservice1.asmx/Test HTTP/1.1
        Accept: */*
        Referer: http://localhost.:4164/webservice1.asmx?op=Test
        Accept-Language: en-us
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2; MS-RTC LM 8)
        Content-Type: application/x-www-form-urlencoded
        Accept-Encoding: gzip, deflate
        Host: localhost.:4164
        Content-Length: 0
        Connection: Keep-Alive
        Pragma: no-cache

    Fiddler2 response:

        HTTP/1.1 200 OK
        Server: ASP.NET Development Server/9.0.0.0
        Date: Thu, 21 Jan 2010 14:21:50 GMT
        X-AspNet-Version: 2.0.50727
        Cache-Control: private, max-age=0
        Content-Type: text/xml; charset=utf-8
        Content-Length: 96
        Connection: Close

        <?xml version="1.0" encoding="utf-8"?>
        <string xmlns="http://tempuri.org/">Hello World</string>

    Storm request (body only):

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <Test xmlns="http://tempuri.org/" />
          </soap:Body>
        </soap:Envelope>

    Storm response:

        Status Code: 200
        Content Length: 339
        Content Type: text/xml; charset=utf-8
        Server: ASP.NET Development Server/9.0.0.0
        Status Description: OK

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <soap:Body>
            <TestResponse xmlns="http://tempuri.org/">
              <TestResult>Hello World</TestResult>
            </TestResponse>
          </soap:Body>
        </soap:Envelope>

    Thanks for any help.
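    Note what the captured request actually is: Content-Type: application/x-www-form-urlencoded against /webservice1.asmx/Test is the browser test page using the plain HTTP-POST protocol, not SOAP - Fiddler faithfully shows whatever the client sends. To see SOAP in Fiddler, send a SOAP request through it. A hedged C# sketch, with the URL and port taken from the capture above and the envelope matching Storm's:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class SoapProbe
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://localhost.:4164/webservice1.asmx");
                request.Method = "POST";
                request.ContentType = "text/xml; charset=utf-8";
                request.Headers.Add("SOAPAction", "\"http://tempuri.org/Test\"");
                request.Proxy = new WebProxy("127.0.0.1", 8888); // route through Fiddler explicitly

                string envelope =
                    "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
                    "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                    "<soap:Body><Test xmlns=\"http://tempuri.org/\" /></soap:Body>" +
                    "</soap:Envelope>";
                byte[] body = Encoding.UTF8.GetBytes(envelope);
                request.ContentLength = body.Length;
                using (Stream s = request.GetRequestStream())
                    s.Write(body, 0, body.Length);

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    Console.WriteLine(reader.ReadToEnd());
            }
        }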

  • Why aren't min-width and max-width working as I expect?

    - by Nathan Long
    I'm trying to adjust a CSS page layout using min-width and max-width. To simplify the problem, I made this test page. I'm trying it out in the latest versions of Firefox and Chrome with the same results.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <title>Testing min-width and max-width</title>
          <style type="text/css">
            div{float: left; max-width: 400px; min-width: 200px;}
            div.a{background: orange;}
            div.b{background: gray;}
          </style>
        </head>
        <body>
          <div class="a">
            (Giant block of filler text here)
          </div>
          <div class="b">
            (Giant block of filler text here)
          </div>
        </body>
        </html>

    Here's what I expect to happen:

      1. With the browser maximized, the divs sit side by side, each 400px wide: their maximum width.
      2. Shrink the browser window, and they both shrink to 200px: their minimum width.
      3. Further shrinking the browser has no effect on them.

    Here's what actually happens, starting at step 2:

      2. Shrink the browser window, and as soon as they can't sit side by side at their max width, the second div drops below the first.
      3. Further shrinking the browser makes them get narrower and narrower, as small as I can make the window.

    So here are my questions:

      1. What does max-width mean if the element will sooner hop down in the layout than go lower than its maximum width?
      2. What does min-width mean if the element will happily get narrower than that if the browser window keeps shrinking?
      3. Is there any way to achieve what I want: have these elements sit side by side, happily shrinking until they reach 200px each, and only then adjust the layout so that the second one drops down?
      4. And of course... What am I doing wrong?

  • Prevent SQL Injection in Dynamic column names

    - by Mr Shoubs
    I can't get away without writing some dynamic SQL conditions in a part of my system (using Postgres). My question is how best to avoid SQL injection with the method I am currently using.

    EDIT (reasoning): There are many columns in a number of tables (a number which only grows, and which is maintained elsewhere). I need a method of allowing the user to decide which (predefined) column they want to query (and, if necessary, apply string functions to). The query itself is far too complex for the user to write themselves, nor do they have access to the db. There are thousands of users with varying requirements, and I need to remain as flexible as possible - I shouldn't have to revisit the code unless the main query needs to change. Also, there is no way of knowing what conditions the user will need to use.

    I have objects (received via web service) that generate a condition (the generation method is below - it isn't perfect yet) for some large SQL queries. The _FieldName is user-editable (the parameter name was too, but it didn't need to be) and I am worried it could be an attack vector. I put double quotes (see quoted identifiers) around the field name in an attempt to sanitize the string; this way it can never be a keyword. I could also look up the field name against a list of fields, but that would be difficult to maintain on a timely basis. Unfortunately the user must enter the condition criteria. I am sure there must be more I can add to the sanitize method? And does quoting the column name make it safe? (My limited testing seems to think so.)

    An example built condition would be:

        AND upper(brandloaded.make) like 'O%' and upper(brandloaded.make) not like 'OTHERBRAND'

    Any help or suggestions are appreciated.

        Public Function GetCondition() As String
            Dim sb As New Text.StringBuilder
            'put quotes around the field name in an attempt to prevent some sql injection
            'http://www.postgresql.org/docs/8.2/static/sql-syntax-lexical.html
            sb.AppendFormat(" {0} ""{1}"" ", _LogicOperator.ToString, _FieldName)
            Select Case _ConditionOperator
                Case ConditionOperatorOptions.Equals
                    sb.Append(" = ")
                ...
            End Select
            sb.AppendFormat(" {0} ", Me.UniqueParameterName) 'for parameter
            Return Me.Sanitize(sb)
        End Function

        Private Function Sanitize(ByVal sb As Text.StringBuilder) As String
            'compare against a similar blacklist mentioned here: http://forums.asp.net/t/1254125.aspx
            sb.Replace(";", "")
            sb.Replace("'", "")
            sb.Replace("\", "")
            sb.Replace(Chr(8), "")
            Return sb.ToString
        End Function

        Public ReadOnly Property UniqueParameterName() As String
            Get
                Return String.Concat(":", _UniqueIdentifier)
            End Get
        End Property
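    On the "difficult to maintain" concern: the whitelist doesn't have to be maintained by hand - it can be loaded from the live schema at startup, which turns identifier validation into a lookup instead of a blacklist. A hedged sketch (the helper names are illustrative, and the values still travel as parameters exactly as before):

        Private Shared ReadOnly AllowedFields As New HashSet(Of String)(StringComparer.OrdinalIgnoreCase)

        Public Shared Sub LoadAllowedFields(ByVal connection As IDbConnection)
            Using cmd As IDbCommand = connection.CreateCommand()
                cmd.CommandText = _
                    "SELECT table_name || '.' || column_name " & _
                    "FROM information_schema.columns WHERE table_schema = 'public'"
                Using reader As IDataReader = cmd.ExecuteReader()
                    While reader.Read()
                        AllowedFields.Add(reader.GetString(0))
                    End While
                End Using
            End Using
        End Sub

        Public Function GetCondition() As String
            If Not AllowedFields.Contains(_FieldName) Then
                Throw New ArgumentException("Unknown field: " & _FieldName)
            End If
            '... build the condition exactly as before; the value stays parameterized ...
        End Function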

  • CLR 4.0 inlining policy? (maybe bug with MethodImplOptions.NoInlining)

    - by ControlFlow
    I've been testing some new CLR 4.0 behavior in method inlining (cross-assembly inlining) and found some strange results.

    Assembly ClassLib.dll:

        using System.Diagnostics;
        using System;
        using System.Reflection;
        using System.Security;
        using System.Runtime.CompilerServices;

        namespace ClassLib
        {
            public static class A
            {
                static readonly MethodInfo GetExecuting = typeof(Assembly).GetMethod("GetExecutingAssembly");

                public static Assembly Foo(out StackTrace stack) // 13 bytes
                {
                    // explicit call to GetExecutingAssembly()
                    stack = new StackTrace();
                    return Assembly.GetExecutingAssembly();
                }

                public static Assembly Bar(out StackTrace stack) // 25 bytes
                {
                    // reflection call to GetExecutingAssembly()
                    stack = new StackTrace();
                    return (Assembly) GetExecuting.Invoke(null, null);
                }

                public static Assembly Baz(out StackTrace stack) // 9 bytes
                {
                    stack = new StackTrace();
                    return null;
                }

                public static Assembly Bob(out StackTrace stack) // 13 bytes
                {
                    // call of non-inlinable method!
                    return SomeSecurityCriticalMethod(out stack);
                }

                [SecurityCritical, MethodImpl(MethodImplOptions.NoInlining)]
                static Assembly SomeSecurityCriticalMethod(out StackTrace stack)
                {
                    stack = new StackTrace();
                    return Assembly.GetExecutingAssembly();
                }
            }
        }

    Assembly ConsoleApp.exe:

        using System;
        using ClassLib;
        using System.Diagnostics;

        class Program
        {
            static void Main()
            {
                Console.WriteLine("runtime: {0}", Environment.Version);
                StackTrace stack;
                Console.WriteLine("Foo: {0}\n{1}", A.Foo(out stack), stack);
                Console.WriteLine("Bar: {0}\n{1}", A.Bar(out stack), stack);
                Console.WriteLine("Baz: {0}\n{1}", A.Baz(out stack), stack);
                Console.WriteLine("Bob: {0}\n{1}", A.Bob(out stack), stack);
            }
        }

    Results:

        runtime: 4.0.30128.1
        Foo: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
           at ClassLib.A.Foo(StackTrace& stack)
           at Program.Main()
        Bar: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
           at ClassLib.A.Bar(StackTrace& stack)
           at Program.Main()
        Baz:
           at Program.Main()
        Bob: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
           at Program.Main()

    So the questions are:

      1. Why does the JIT not inline the Foo and Bar calls the way it does Baz? They are under 32 bytes of IL and are good candidates for inlining.
      2. Why does the JIT inline the call of Bob and the inner call of SomeSecurityCriticalMethod, which is marked with the [MethodImpl(MethodImplOptions.NoInlining)] attribute?
      3. Why does GetExecutingAssembly return a valid assembly when called by the inlined Baz and SomeSecurityCriticalMethod methods? I'd expect it to perform a stack walk to detect the executing assembly, but the stack will contain only the Program.Main() call and no methods of the ClassLib assembly, so ConsoleApp should be returned.

  • WCF Service Library - make calls from Console App

    - by inutan
    Hello there, I have a WCF Service Library with netTcpBinding. Its app.config is as follows:

        <configuration>
          <system.serviceModel>
            <bindings>
              <netTcpBinding>
                <binding name="netTcp" maxBufferPoolSize="50000000" maxReceivedMessageSize="50000000">
                  <readerQuotas maxDepth="500" maxStringContentLength="50000000" maxArrayLength="50000000"
                                maxBytesPerRead="50000000" maxNameTableCharCount="50000000" />
                  <security mode="None"></security>
                </binding>
              </netTcpBinding>
            </bindings>
            <services>
              <service behaviorConfiguration="ReportingComponentLibrary.TemplateServiceBehavior"
                       name="ReportingComponentLibrary.TemplateReportService">
                <endpoint address="TemplateService" binding="netTcpBinding" bindingConfiguration="netTcp"
                          contract="ReportingComponentLibrary.ITemplateService"></endpoint>
                <endpoint address="ReportService" binding="netTcpBinding" bindingConfiguration="netTcp"
                          contract="ReportingComponentLibrary.IReportService"/>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"></endpoint>
                <host>
                  <baseAddresses>
                    <add baseAddress="net.tcp://localhost:8001/TemplateReportService" />
                    <add baseAddress="http://localhost:8080/TemplateReportService" />
                  </baseAddresses>
                </host>
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="ReportingComponentLibrary.TemplateServiceBehavior">
                  <serviceMetadata httpGetEnabled="True"/>
                  <serviceDebug includeExceptionDetailInFaults="True" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    I want to call it from a console application for testing purposes. I understand that I can call it by adding a service reference or by generating a proxy with svcutil, but in both cases my service needs to be up and running (I used WCF Test Client). Is there any other way I can call and test service methods from a console application?
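    One option that skips both svcutil and the service reference: the console test app can host the library in-process with ServiceHost and then call it through a ChannelFactory<T> built directly against the contract interface. A hedged sketch - it assumes the console project references the library and System.ServiceModel, and it wires the endpoint in code rather than reusing the config file:

        using (var host = new ServiceHost(typeof(TemplateReportService),
                   new Uri("net.tcp://localhost:8001/TemplateReportService")))
        {
            var binding = new NetTcpBinding { MaxReceivedMessageSize = 50000000 };
            binding.Security.Mode = SecurityMode.None;

            host.AddServiceEndpoint(typeof(ITemplateService), binding, "TemplateService");
            host.Open();   // the service now runs inside the test process

            var factory = new ChannelFactory<ITemplateService>(binding,
                new EndpointAddress("net.tcp://localhost:8001/TemplateReportService/TemplateService"));
            ITemplateService proxy = factory.CreateChannel();

            // ... call proxy methods here and check the results ...

            ((IClientChannel)proxy).Close();
            factory.Close();
        }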

  • iPhone: odd problem when using a custom cell

    - by Brodie4598
    Please note where I have the NSLog. All it displays in the log is the first three items in nameSection. After some testing, I discovered it is displaying however many keys there are: if I add a key to the plist, it will log a fourth item.

    nameSection should be an array of the strings that make up the key array in the plist file. The plist file has 3 dictionaries, each with several arrays of strings. The code picks the dictionary I am working with correctly, then should use the array names as sections in the table and the strings in each array as what to display in each cell. So if the dictionary I am working with has 3 arrays, NSLog will display 3 strings from the first array:

        2010-05-01 17:03:26.957 Checklists[63926:207] string0
        2010-05-01 17:03:26.960 Checklists[63926:207] string1
        2010-05-01 17:03:26.962 Checklists[63926:207] string2

    then stop with:

        2010-05-01 17:03:26.963 Checklists[63926:207] *** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[NSCFArray objectAtIndex:]: index (3) beyond bounds (3)'

    If I added an array to the dictionary, it would log 4 items instead of 3. I hope this explanation makes sense...

        -(NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return [keys count];
        }

        -(NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            NSString *key = [keys objectAtIndex:section];
            NSArray *nameSection = [names objectForKey:key];
            return [nameSection count];
        }

        -(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            NSUInteger section = [indexPath section];
            NSString *key = [keys objectAtIndex:section];
            NSArray *nameSection = [names objectForKey:key];

            static NSString *SectionsTableIdentifier = @"SectionsTableIdentifier";
            static NSString *ChecklistCellIdentifier = @"ChecklistCellIdentifier ";
            ChecklistCell *cell = (ChecklistCell *)[tableView dequeueReusableCellWithIdentifier:SectionsTableIdentifier];
            if (cell == nil) {
                NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"ChecklistCell" owner:self options:nil];
                for (id oneObject in nib)
                    if ([oneObject isKindOfClass:[ChecklistCell class]])
                        cell = (ChecklistCell *)oneObject;
            }

            NSUInteger row = [indexPath row];
            NSDictionary *rowData = [self.keys objectAtIndex:row];
            NSString *tempString = [[NSString alloc] initWithFormat:@"%@", [nameSection objectAtIndex:row]];
            NSLog(@"%@", tempString);
            cell.colorLabel.text = [tempArray objectAtIndex:0];
            cell.nameLabel.text = [tempArray objectAtIndex:1];
            return cell;
        }

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];
            if (cell.accessoryType == UITableViewCellAccessoryNone) {
                cell.accessoryType = UITableViewCellAccessoryCheckmark;
            } else if (cell.accessoryType == UITableViewCellAccessoryCheckmark) {
                cell.accessoryType = UITableViewCellAccessoryNone;
            }
            [tableView deselectRowAtIndexPath:indexPath animated:NO];
        }

        -(NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
            NSString *key = [keys objectAtIndex:section];
            return key;
        }
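    A hedged reading of the crash, based only on the code above: the out-of-bounds index comes from [self.keys objectAtIndex:row], which feeds a row index into keys, whose count is the number of sections - which is exactly why it dies when the row number reaches the key count ("index (3) beyond bounds (3)"). Row lookups belong in nameSection; something along these lines, where tempArray is assumed to be the per-row array holding the color and name strings:

        // row indexes into the current section's array, never into keys
        NSArray *tempArray = [nameSection objectAtIndex:[indexPath row]];
        cell.colorLabel.text = [tempArray objectAtIndex:0];
        cell.nameLabel.text  = [tempArray objectAtIndex:1];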

  • Why does Git.pm on cygwin complain about 'Out of memory during "large" request'?

    - by Charles Ma
    Hi, I'm getting this error while doing a git svn rebase in cygwin:

        Out of memory during "large" request for 268439552 bytes, total sbrk() is 140652544 bytes at /usr/lib/perl5/site_perl/Git.pm line 898, <GEN1> line 3.

    268439552 is 256MB. Cygwin's maximum memory size is set to 1024MB, so I'm guessing that it has a different maximum memory size for perl? How can I increase the maximum memory size that perl programs can use?

    Update: This is where the error occurs (in Git.pm):

        while (1) {
            my $bytesLeft = $size - $bytesRead;
            last unless $bytesLeft;

            my $bytesToRead = $bytesLeft < 1024 ? $bytesLeft : 1024;
            my $read = read($in, $blob, $bytesToRead, $bytesRead); # line 898
            unless (defined($read)) {
                $self->_close_cat_blob();
                throw Error::Simple("in pipe went bad");
            }

            $bytesRead += $read;
        }

    I've added a print before line 898 to print out $bytesToRead and $bytesRead, and the result was 1024 for $bytesToRead and 134220800 for $bytesRead, so it's reading 1024 bytes at a time and it has already read 128MB. Perl's read function must be out of memory and is trying to request double its memory size... is there a way to specify how much memory to request? Or is that implementation dependent?

    Update 2: While testing memory allocation in cygwin, this C program's output was 1536MB:

        int main() {
            unsigned int bit = 0x40000000, sum = 0;
            char *x;

            while (bit > 4096) {
                x = malloc(bit);
                if (x) sum += bit;
                bit >>= 1;
            }
            printf("%08x bytes (%.1fMb)\n", sum, sum / 1024.0 / 1024.0);
            return 0;
        }

    While this perl program crashed if the file size was greater than 384MB (but succeeded if the file size was less):

        open(F, "<400") or die("can't read\n");
        $size = -s "400";
        $read = read(F, $s, $size);

    The error is similar:

        Out of memory during "large" request for 536875008 bytes, total sbrk() is 217088 bytes at mem.pl line 6.
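    One workaround commonly suggested for this symptom - hedged, because the knob's location depends on the Cygwin release, and it is separate from the general "maximum memory" setting: raise the per-process heap limit that Cygwin's sbrk() draws from, then restart every Cygwin process. On 1.5-era Cygwin that limit was the heap_chunk_in_mb registry value; on newer releases the heap is sized per executable (see the peflags tool from the rebase package):

        # Older Cygwin: raise the per-process heap to 1024MB via the registry
        regtool -i set '/HKLM/SOFTWARE/Cygnus Solutions/Cygwin/heap_chunk_in_mb' 1024
        regtool -v list '/HKLM/SOFTWARE/Cygnus Solutions/Cygwin'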

  • Is it possible to reliably auto-decode user files to Unicode? [C#]

    - by NVRAM
    I have a web application that allows users to upload their content for processing. The processing engine expects UTF-8 (and I'm composing XML from multiple users' files), so I need to ensure that I can properly decode the uploaded files. Since I'd be surprised if any of my users knew their files were even encoded, I have very little hope they'd be able to correctly specify the encoding (decoder) to use. And so, my application is left with the task of detecting before decoding.

    This seems like such a universal problem that I'm surprised not to find either a framework capability or a general recipe for the solution. Can it be I'm not searching with meaningful search terms? I've implemented BOM-aware detection (http://en.wikipedia.org/wiki/Byte_order_mark), but I'm not sure how often files will be uploaded without a BOM to indicate encoding, and this isn't useful for most non-UTF files.

    My questions boil down to:

      1. Is BOM-aware detection sufficient for the vast majority of files?
      2. In the case where BOM detection fails, is it possible to try different decoders and determine if they are "valid"? (My attempts indicate the answer is "no.")
      3. Under what circumstances will a "valid" file fail with the C# encoder/decoder framework?
      4. Is there a repository anywhere that has a multitude of files with various encodings to use for testing?
      5. While I'm specifically asking about C#/.NET, I'd like to know the answer for Java, Python and other languages for the next time I have to do this.

    So far I've found:

      - A "valid" UTF-16 file with Ctrl-S characters caused encoding to UTF-8 to throw an exception (illegal character?). (That was an XML encoding exception.)
      - Decoding a valid UTF-16 file with UTF-8 succeeds but gives text with null characters. Huh?

    Currently, I only expect UTF-8, UTF-16 and probably ISO-8859-1 files, but I want the solution to be extensible if possible. My existing set of input files isn't nearly broad enough to uncover all the problems that will occur with live files. Although the files I'm trying to decode are "text", I think they are often created with methods that leave garbage characters in the files. Hence "valid" files may not be "pure". Oh joy. Thanks.
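    On question 2, "try and see" does work for one important case if the decoder is made strict: UTF8Encoding can be told to throw on invalid byte sequences instead of silently substituting U+FFFD, and essentially no real-world ISO-8859-1 text happens to be valid UTF-8. The null characters seen when decoding UTF-16 as UTF-8 are the high bytes of UTF-16 code units, which is itself a usable heuristic. A hedged C# sketch of the BOM-then-strict-UTF-8-then-fallback order:

        using System.Text;

        static Encoding GuessEncoding(byte[] data)
        {
            // 1. Trust a BOM when one is present.
            if (data.Length >= 3 && data[0] == 0xEF && data[1] == 0xBB && data[2] == 0xBF)
                return Encoding.UTF8;
            if (data.Length >= 2 && data[0] == 0xFF && data[1] == 0xFE)
                return Encoding.Unicode;            // UTF-16 LE
            if (data.Length >= 2 && data[0] == 0xFE && data[1] == 0xFF)
                return Encoding.BigEndianUnicode;   // UTF-16 BE

            // 2. Strict UTF-8: throws instead of substituting on bad sequences.
            try
            {
                new UTF8Encoding(false, true).GetString(data); // (emitBOM, throwOnInvalidBytes)
                return Encoding.UTF8;
            }
            catch (DecoderFallbackException)
            {
                // 3. Every byte sequence decodes as ISO-8859-1: the guess of last resort.
                return Encoding.GetEncoding("ISO-8859-1");
            }
        }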

  • Closure Tables - Is this enough data to display a tree view?

    - by James Pitt
    Here is the table I have created by testing the closure table method:

        | id  | parentId | childId | hops |
        |-----|----------|---------|------|
        | 270 |    6     |    6    |  0   |
        | 271 |    7     |    7    |  0   |
        | 272 |    8     |    8    |  0   |
        | 273 |    9     |    9    |  0   |
        | 276 |   10     |   10    |  0   |
        | 281 |    9     |   10    |  1   |
        | 282 |    7     |    9    |  1   |
        | 283 |    7     |   10    |  2   |
        | 285 |    7     |    8    |  1   |
        | 286 |    6     |    7    |  1   |
        | 287 |    6     |    9    |  2   |
        | 288 |    6     |   10    |  3   |
        | 289 |    6     |    8    |  2   |
        | 293 |    6     |    9    |  1   |
        | 294 |    6     |   10    |  2   |

    I am trying to create a simple tree of this using PHP. There does not seem to be enough data to create the tree. For example, when I look purely at parentId = 6:

        - Part 6
          - Part 7
            - ?
            - ?
          - Part 9
            - ?
            - ?

    We know that parts 8 and 10 exist below Part 7 or 9, but not which. We know that part 10 exists at both 3 and 4 nodes deep, but where? If I look at other data in the table, it is possible to tell it should be:

        - Part 6
          - Part 7
            - Part 9
              - Part 10
          - Part 9
            - Part 10

    I thought one of the benefits of closure tables was that there was no need for recursive queries? Could you help explain what I am doing wrong?

    EDIT: For clarification, this is a mapping table. There is another table called "parts" which has a column called part_id that correlates to both the parentId and childId columns in the "closure" table. The "id" column in the table above (closure) is just for the purposes of maintaining a primary key; it is not really necessary. The methods I have used to create this closure table are described in the following article: http://dirtsimple.org/2010/11/simplest-way-to-do-tree-based-queries.html

    EDIT 2: It can have two and three hops. I will explain more easily by assigning names to the items:

        Part 6  = Bicycle
        Part 7  = Gears
        Part 8  = Chain
        Part 9  = Bolt
        Part 10 = Nut

    Nut is part of Bolt. The Bolt and Nut combo exists directly within Bicycle and within Gears, which is part of Bicycle. In relation to what method to use, I have looked at adjacency lists, edges, enumerated paths, closures, DAGs (networks) and the nested set model. I am still trying to work out what is what, but this is an extremely complex component database where there are multiple parents, and any modification to a sub-tree must propagate through the other trees. More importantly, there will be insertions, deletions and tree views, and I wish to avoid recursion during general use, even at the cost of database space and query time during entry.
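    A hedged clarification on the "no recursion" point: a closure table removes recursion from subtree queries (one SELECT fetches a whole subtree), but to draw the tree you still walk parent-child edges, and those are simply the hops = 1 rows; the longer-hop rows cannot tell you which branch a path went through. So the render query is just:

        SELECT parentId, childId
        FROM closure
        WHERE hops = 1
        ORDER BY parentId, childId;

    PHP then recurses over this edge list from the root. Because this is a multi-parent graph (9 hangs under both 6 and 7), the same node is legitimately emitted once per parent, which reproduces the duplicated Bolt/Nut branches shown above.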

  • Drag to pan on a UserControl

    - by Matías
    Hello, I'm trying to build my own "PictureBox like" control, adding some functionality. For example, I want to be able to pan over a big image by simply clicking and dragging with the mouse. The problem seems to be in my OnMouseMove method. If I use the following code, I get the drag speed and precision I want, but of course, when I release the mouse button and try to drag again, the image is restored to its original position.

        using System.Drawing;
        using System.Windows.Forms;

        namespace Testing
        {
            public partial class ScrollablePictureBox : UserControl
            {
                private Image image;
                private bool centerImage;

                public Image Image
                {
                    get { return image; }
                    set { image = value; Invalidate(); }
                }

                public bool CenterImage
                {
                    get { return centerImage; }
                    set { centerImage = value; Invalidate(); }
                }

                public ScrollablePictureBox()
                {
                    InitializeComponent();
                    SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.OptimizedDoubleBuffer, true);
                    Image = null;
                    AutoScroll = true;
                    AutoScrollMinSize = new Size(0, 0);
                }

                private Point clickPosition;
                private Point scrollPosition;

                protected override void OnMouseDown(MouseEventArgs e)
                {
                    base.OnMouseDown(e);
                    clickPosition.X = e.X;
                    clickPosition.Y = e.Y;
                }

                protected override void OnMouseMove(MouseEventArgs e)
                {
                    base.OnMouseMove(e);
                    if (e.Button == MouseButtons.Left)
                    {
                        scrollPosition.X = clickPosition.X - e.X;
                        scrollPosition.Y = clickPosition.Y - e.Y;
                        AutoScrollPosition = scrollPosition;
                    }
                }

                protected override void OnPaint(PaintEventArgs e)
                {
                    base.OnPaint(e);
                    e.Graphics.FillRectangle(new Pen(BackColor).Brush, 0, 0, e.ClipRectangle.Width, e.ClipRectangle.Height);
                    if (Image == null)
                        return;

                    int centeredX = AutoScrollPosition.X;
                    int centeredY = AutoScrollPosition.Y;
                    if (CenterImage)
                    {
                        // Something not relevant
                    }
                    AutoScrollMinSize = new Size(Image.Width, Image.Height);
                    e.Graphics.DrawImage(Image, new RectangleF(centeredX, centeredY, Image.Width, Image.Height));
                }
            }
        }

    But if I modify my OnMouseMove method to look like this:

        protected override void OnMouseMove(MouseEventArgs e)
        {
            base.OnMouseMove(e);
            if (e.Button == MouseButtons.Left)
            {
                scrollPosition.X += clickPosition.X - e.X;
                scrollPosition.Y += clickPosition.Y - e.Y;
                AutoScrollPosition = scrollPosition;
            }
        }

    ... you will see that the dragging is not as smooth as before, and sometimes it behaves weirdly (like with lag or something). What am I doing wrong? I've also tried removing all "base" calls in a desperate attempt to solve this issue, haha, but again, it didn't work. Thanks for your time.
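    A hedged sketch of the usual fix for exactly this pair of symptoms: anchor on the scroll offset captured at mouse-down and make every move relative to that anchor. The += version stutters because e.X/e.Y are client coordinates, which shift as AutoScrollPosition moves the content under the cursor, so each delta gets double-counted:

        private Point clickPosition;
        private Point scrollOrigin;

        protected override void OnMouseDown(MouseEventArgs e)
        {
            base.OnMouseDown(e);
            clickPosition = e.Location;
            // the getter reports a negative offset; negate it to get the real origin
            scrollOrigin = new Point(-AutoScrollPosition.X, -AutoScrollPosition.Y);
        }

        protected override void OnMouseMove(MouseEventArgs e)
        {
            base.OnMouseMove(e);
            if (e.Button == MouseButtons.Left)
            {
                // anchor + drag delta; survives release-and-drag-again cleanly
                AutoScrollPosition = new Point(
                    scrollOrigin.X + (clickPosition.X - e.X),
                    scrollOrigin.Y + (clickPosition.Y - e.Y));
            }
        }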

  • Getting <div>s to align next to each other

    - by user1322845
    I have the following code I am trying to get working correctly:

        <div id="newspost_bg">
          <article>
            <p>
              <header><h3>The fast red fox!</h3></header>
              This is where the article resides in the article tag. This is good for SEO optimization.
              <footer>Read More..</footer>
            </p>
          </article>
        </div>
        <div id="newspost_bg">
          hello
        </div>
        <div id="newspost_bg">
          hello
        </div>
        <div id="advertisement">
          <script type="text/javascript"><!--
          google_ad_client = "ca-pub-2139701283631933";
          /* testing site */
          google_ad_slot = "4831288817";
          google_ad_width = 120;
          google_ad_height = 600;
          //-->
          </script>
        </div>

    Here is the CSS that goes with it:

        #newspost_bg{
          position: relative;
          background-color: #d9dde1;
          width:700px;
          height:250px;
          margin: 10px;
          margin-left: 20px;
          border: solid 10px #1d2631;
          float:left;
        }
        #newspost_bg article{
          position: relative;
          margin-left: 20px;
        }
        #advertisement{
          float: left;
          background-color: #d9dde1;
          width: 125px;
          height: 605px;
          margin: 10px;
        }

    The problem I'm experiencing is that the advertisement I'm trying to set up aligns with the last div with the id of newspost_bg, but I'm looking to have it align with the top of the container it is in. I don't know if this is enough info; if not, please let me know what you might need. I'm new to the web coding scene, so any and all critiques help me.
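    Two things worth trying, sketched rather than guaranteed: an id must be unique per page, so the repeated newspost_bg should become a class, and floating the ad to the right while placing it first in the source lets it start at the top of the container instead of after the last floated post:

        <!-- ad first in the source so its float starts at the top -->
        <div id="advertisement"> ... ad script unchanged ... </div>
        <div class="newspost_bg"> ... </div>
        <div class="newspost_bg"> ... </div>
        <div class="newspost_bg"> ... </div>

        /* class selector instead of a repeated id; other rules unchanged */
        .newspost_bg { float: left; /* width, height, margins, border as before */ }
        #advertisement { float: right; }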

  • Performance of SHA-1 Checksum from Android 2.2 to 2.3 and Higher

    - by sbrichards
    In testing the performance of:

        package com.srichards.sha;

        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.TextView;

        import java.io.IOException;
        import java.io.InputStream;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipFile;

        import com.srichards.sha.R;

        public class SHAHashActivity extends Activity {
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                TextView tv = new TextView(this);
                String shaVal = this.getString(R.string.sha);

                long systimeBefore = System.currentTimeMillis();
                String result = shaCheck(shaVal);
                long systimeResult = System.currentTimeMillis() - systimeBefore;

                tv.setText("\nRunTime: " + systimeResult + "\nHas been modified? | Hash Value: " + result);
                setContentView(tv);
            }

            public String shaCheck(String shaVal) {
                try {
                    String resultant = "null";
                    MessageDigest digest = MessageDigest.getInstance("SHA1");
                    ZipFile zf = null;
                    try {
                        zf = new ZipFile("/data/app/com.blah.android-1.apk"); // /data/app/com.blah.android-2.apk
                    } catch (IOException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                    ZipEntry ze = zf.getEntry("classes.dex");
                    InputStream file = zf.getInputStream(ze);
                    byte[] dataBytes = new byte[32768]; // 65536 32768
                    int nread = 0;
                    while ((nread = file.read(dataBytes)) != -1) {
                        digest.update(dataBytes, 0, nread);
                    }
                    byte[] rbytes = digest.digest();
                    StringBuffer sb = new StringBuffer("");
                    for (int i = 0; i < rbytes.length; i++) {
                        sb.append(Integer.toString((rbytes[i] & 0xff) + 0x100, 16).substring(1));
                    }
                    if (shaVal.equals(sb.toString())) {
                        resultant = ("\nFalse : " + "\nFound:\n" + sb.toString() + "|" + "\nHave:\n" + shaVal);
                    } else {
                        resultant = ("\nTrue : " + "\nFound:\n" + sb.toString() + "|" + "\nHave:\n" + shaVal);
                    }
                    return resultant;
                } catch (IOException e) {
                    e.printStackTrace();
                } catch (NoSuchAlgorithmException e) {
                    e.printStackTrace();
                }
                return null;
            }
        }

    On a 2.2 device I get an average runtime of ~350ms, while on newer devices I get runtimes of 26-50ms, which is substantially lower. I'm keeping in mind that these devices are newer and have better hardware, but I am also wondering if the platform and its implementation affect performance much, and if there is anything that could reduce runtimes on 2.2 devices. Note, the classes.dex of the .apk being accessed is roughly 4MB. Thanks!
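    On the software side, part of the gap is plausibly Dalvik itself: the JIT first arrived in 2.2 and kept improving afterwards, so identical bytecode simply runs faster on 2.3+. One cheap, hedged thing worth measuring on the 2.2 device is cutting down the number of small reads from the zip stream with a buffered wrapper; only the wrapping below is new, the rest matches the code above:

        import java.io.BufferedInputStream;

        // 64K buffer reduces per-read overhead on the compressed zip stream
        InputStream file = new BufferedInputStream(zf.getInputStream(ze), 65536);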
