Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Possible to use Python with Intel's Atom Developer SDK (C/C++)?

    - by Jordan Magnuson
    So I've made a game in Python and PyGame. Now I'm interested in submitting the game to Intel's March Developer Challenge. However, the developer challenge requires use of Intel's Atom Developer SDK (http://appdeveloper.intel.com/en-us/sdk), which only has APIs for C and C++. I'm new to Python and PyGame, and have no experience in C or C++.

    My question is: would it be possible to somehow implement Intel's Atom SDK through/with/from a Python application (as the first link above suggests)? I've read up a little bit on embedding/extending Python into/with C, but I'm not entirely sure what to embed or where. I mean, I know I can do things like this in C:

        #include <Python.h>

        int main(int argc, char *argv[])
        {
            Py_Initialize();
            PyRun_SimpleString("from time import time,ctime\n"
                               "print 'Today is',ctime(time())\n");
            Py_Finalize();
            return 0;
        }

    But what do I do about all my dependencies on Python and Pygame, for people who don't have those installed on their machines? Normally Py2Exe takes care of compacting the required dependencies (I've managed to package my game into an exe/zip), but how do I take care of that stuff in the context of embedding within C? Can I somehow work with py2exe on this, or do I need to do something entirely different for embedding within C?

    It seems like it would be a lot easier to go the route of extending Python with the C validation code, rather than trying to embed my whole game within C, but I think that's not an option, "because the library provided is currently only available as a Visual Studio 2008 '.lib'", meaning the application has to be compiled with Visual Studio...? Any help, thoughts, or ideas are much appreciated!

    You can find the complete SDK Developer's Guide on the Intel site above, but here is their "Hello World" using the C Language API:

        #include <stdio.h>
        #include <stdlib.h>
        #include "adpcore.h"

        int main( int argc, char* argv[] )
        {
            ADP_RET_CODE ret_code;
            const ADP_APPLICATIONID myApplicationID = {{ 0x12345678, 0x11112222, 0x33331234, 0x567890ab }};

            if (( ret_code = ADP_Initialize()) != ADP_SUCCESS ) {
                printf( "ERROR: exiting" );
                exit( -1 );
            }

            if (( ret_code = ADP_IsAuthorized( myApplicationID )) == ADP_AUTHORIZED )
                printf( "Hello World" );
            else
                printf( "Not authorized to run" );

            exit( 0 );
        }

    35-page SDK Developer Guide: http://appdeveloper.intel.com/sites/files/pages/SDK%20Developer%20Guide.pdf
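    A hedged sketch of how the two samples above could combine: a thin C launcher (compiled in Visual Studio against the SDK's .lib) that runs the ADP authorization check and then starts the embedded interpreter. The game module name and entry point are hypothetical, and the Python/Pygame runtime would still need to ship alongside the executable (e.g. the files py2exe collects):

        #include <stdio.h>
        #include <stdlib.h>
        #include <Python.h>
        #include "adpcore.h"

        int main( int argc, char* argv[] )
        {
            const ADP_APPLICATIONID myApplicationID = {{ 0x12345678, 0x11112222, 0x33331234, 0x567890ab }};

            /* Validate against the Atom SDK before anything else runs. */
            if ( ADP_Initialize() != ADP_SUCCESS ) {
                printf( "ERROR: exiting" );
                exit( -1 );
            }
            if ( ADP_IsAuthorized( myApplicationID ) != ADP_AUTHORIZED ) {
                printf( "Not authorized to run" );
                exit( -1 );
            }

            /* Authorized: hand control to the embedded Python game.      */
            /* "game" and game.main() are hypothetical names for the      */
            /* game's top-level module and entry point.                   */
            Py_Initialize();
            PyRun_SimpleString( "import game\n"
                                "game.main()\n" );
            Py_Finalize();
            return 0;
        }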

    Read the article

  • Unable to verify body hash for DKIM

    - by Joshua
    I'm writing a C# DKIM validator and have come across a problem that I cannot solve. Right now I am working on calculating the body hash, as described in Section 3.7, Computing the Message Hashes. I am working with emails that I have dumped using a modified version of the EdgeTransportAsyncLogging sample in the Exchange 2010 Transport Agent SDK. Instead of converting the emails when saving, it just opens a file based on the MessageID and dumps the raw data to disk.

    I am able to successfully compute the body hash of the sample email provided in Section A.2 using the following code:

        SHA256Managed hasher = new SHA256Managed();
        ASCIIEncoding asciiEncoding = new ASCIIEncoding();

        string rawFullMessage = File.ReadAllText(@"C:\Repositories\Sample-A.2.txt");
        string headerDelimiter = "\r\n\r\n";
        int headerEnd = rawFullMessage.IndexOf(headerDelimiter);
        string header = rawFullMessage.Substring(0, headerEnd);
        string body = rawFullMessage.Substring(headerEnd + headerDelimiter.Length);

        byte[] bodyBytes = asciiEncoding.GetBytes(body);
        byte[] bodyHash = hasher.ComputeHash(bodyBytes);
        string bodyBase64 = Convert.ToBase64String(bodyHash);

        string expectedBase64 = "2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8=";
        Console.WriteLine("Expected hash: {1}{0}Computed hash: {2}{0}Are equal: {3}",
            Environment.NewLine, expectedBase64, bodyBase64, expectedBase64 == bodyBase64);

    The output from the above code is:

        Expected hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8=
        Computed hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8=
        Are equal: True

    Now, most emails come across with the c=relaxed/relaxed setting, which requires you to do some work on the body and header before hashing and verifying. And while I was working on it (failing to get it to work) I finally came across a message with c=simple/simple, which means that you process the whole body as-is minus any empty CRLFs at the end of the body. (Really, the rules for Body Canonicalization are quite ... simple.)

    Here is the real DKIM email with a signature using the simple algorithm (with only unneeded headers cleaned up). Now, using the above code and updating the expectedBase64 hash, I get the following results:

        Expected hash: VnGg12/s7xH3BraeN5LiiN+I2Ul/db5/jZYYgt4wEIw=
        Computed hash: ISNNtgnFZxmW6iuey/3Qql5u6nflKPTke4sMXWMxNUw=
        Are equal: False

    The expected hash is the value from the bh= field of the DKIM-Signature header. Now, the file used in the second test is a direct raw output from the Exchange 2010 Transport Agent. If so inclined, you can view the modified EdgeTransportLogging.txt.

    At this point, no matter how I modify the second email, changing the start position or number of CRLFs at the end of the file, I cannot get the files to match. What worries me is that I have been unable to validate any body hash so far (simple or relaxed) and that it may not be feasible to process DKIM through Exchange 2010.
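    One detail worth double-checking (a hedged sketch, not a confirmed fix): "simple" body canonicalization in RFC 4871, Section 3.4.3, removes all empty lines at the end of the body and then terminates it with exactly one CRLF (an empty body becomes a single CRLF). The snippet above hashes the body exactly as read, which happens to match for the A.2 sample but fails on any dump with extra or missing trailing CRLFs, or with bare LF line endings. A minimal C# helper under that assumption:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class DkimCanon
        {
            // "simple" body canonicalization (RFC 4871, 3.4.3):
            // drop every trailing CR/LF, then end with exactly one CRLF.
            public static string SimpleBody(string body)
            {
                return body.TrimEnd('\r', '\n') + "\r\n";
            }

            public static string BodyHashBase64(string body)
            {
                using (SHA256 sha = SHA256.Create())
                {
                    byte[] hash = sha.ComputeHash(Encoding.ASCII.GetBytes(SimpleBody(body)));
                    return Convert.ToBase64String(hash);
                }
            }
        }

    If the Exchange dump uses bare LF line endings, they would need normalizing to CRLF before this step as well.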

    Read the article

  • shopify_app syntax error

    - by Pete171
    Edit: Debugging has got me further. Question clarified.

    We have installed Ruby, RubyGems and Rails and have forked the shopify_app project. We have created a new Rails application and added three items to the Gemfile: execjs, therubyracer and shopify_app. Running rails s in order to start our Rails application returns this trace:

        root@ubuntu:/usr/local/pete-shopify/cart# rails s
        Faraday: you may want to install system_timer for reliable timeouts
        /var/lib/gems/1.8/gems/shopify_app-4.1.0/lib/shopify_app.rb:15:in `require':
        /var/lib/gems/1.8/gems/shopify_app-4.1.0/lib/shopify_app/login_protection.rb:5:
        syntax error, unexpected ':', expecting kEND (SyntaxError)
        ...rce::UnauthorizedAccess, with: :close_session
                                              ^
        from /var/lib/gems/1.8/gems/shopify_app-4.1.0/lib/shopify_app.rb:15
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:68:in `require'
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:68:in `require'
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:66:in `each'
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:66:in `require'
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:55:in `each'
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:55:in `require'
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler.rb:128:in `require'
        from /usr/local/pete-shopify/cart/config/application.rb:7
        from /var/lib/gems/1.8/gems/railties-3.2.8/lib/rails/commands.rb:53:in `require'
        from /var/lib/gems/1.8/gems/railties-3.2.8/lib/rails/commands.rb:53
        from /var/lib/gems/1.8/gems/railties-3.2.8/lib/rails/commands.rb:50:in `tap'
        from /var/lib/gems/1.8/gems/railties-3.2.8/lib/rails/commands.rb:50
        from script/rails:6:in `require'
        from script/rails:6

    I haven't modified any files since forking from GitHub. Lines 1 - 6 of login_protection.rb are as follows:

        module ShopifyApp::LoginProtection
          extend ActiveSupport::Concern

          included do
            rescue_from ActiveResource::UnauthorizedAccess, with: :close_session
          end

    I've looked into this and it seems that the error is caused by the new-style hash syntax between Ruby 1.8 and 1.9: key: value instead of :key => value. Running ruby -v from the command line returns ruby 1.9.3p0 (2011-10-30 revision 33570) [x86_64-linux]. This would seem to be OK... but I did some debugging, and inside the file /var/lib/gems/1.8/gems/shopify_app-4.1.0/lib/shopify_app.rb I put this (at the top):

        puts RUBY_VERSION
        exit

    It printed 1.8.7. Why are ruby -v and RUBY_VERSION giving me different results? And am I correct in assuming this is the cause of my problems?

    Note: To upgrade Ruby I installed the later version with apt-get and then switched to it by using update-alternatives --config ruby and selecting option 2, like this:

        root@ubuntu:/usr/local/pete-shopify/cart# update-alternatives --config ruby
        There are 2 choices for the alternative ruby (providing /usr/bin/ruby).

          Selection    Path                 Priority   Status
        ------------------------------------------------------------
          0            /usr/bin/ruby1.8      50        auto mode
          1            /usr/bin/ruby1.8      50        manual mode
        * 2            /usr/bin/ruby1.9.1    10        manual mode

    Also note: We're PHP/Python developers, so this is all new to us!

    Summary:

    1 - Am I right in determining the cause of the syntax error?
    2 - Why do RUBY_VERSION and ruby -v give me different results?
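    A hedged diagnostic sketch: every path in the traceback lives under /var/lib/gems/1.8, which suggests the rails and bundler launchers are still being run by Ruby 1.8 (which cannot parse the 1.9 hash syntax), even though /usr/bin/ruby now points at 1.9.1. Commands like these (Debian/Ubuntu paths assumed) show which interpreter each command actually uses:

        # Which ruby does the shell find, and what does it report?
        which ruby && ruby -v

        # Which interpreter does the rails launcher's shebang name?
        head -1 "$(which rails)"

        # Compare the gem directories of the two interpreters.
        ruby1.8 -S gem env gemdir
        ruby1.9.1 -S gem env gemdir

    If the shebang or gem directory points at 1.8, reinstalling bundler and rails under 1.9.1 (or switching the gem alternative as well as ruby) would be the next thing to try.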

    Read the article

  • WPF MVVM: Convention over Configuration for ResourceDictionary ?

    - by Jeffrey Knight
    Update

    In the wiki spirit of StackOverflow, here's an update: I spiked Joe White's IValueConverter suggestion below. It works like a charm. I've written a "quickstart" example of this that automates the mapping of ViewModels to Views using some cheap string replacement. If no View is found to represent the ViewModel, it defaults to an "Under Construction" page. I'm dubbing this approach "WPF MVVM White" since it was Joe White's idea.

    Here are a couple of screenshots. The first image is a case where "[SomeControlName]ViewModel" has a corresponding "[SomeControlName]View", based on pure naming convention. The second is a case where the ViewModel doesn't have any View to represent it.

    No more ResourceDictionaries with long ViewModel-to-View mappings. It's pure naming convention now. I'm hosting a download of the project here: http://rootsilver.com/files/Mvvm.White.Quickstart.zip. I'll follow up with a longer blog post walkthrough.

    Original Post

    I read Josh Smith's fantastic MSDN article on WPF MVVM over the weekend. It's destined to be a cult classic. It took me a while to wrap my head around the magic of asking WPF to render the ViewModel. It's like saying "Here's a class, WPF. Go figure out which UI to use to present it." For those who missed this magic, WPF can do this by looking up the View for a ViewModel in the ResourceDictionary mapping and pulling out the corresponding View. (Scroll down to Figure 10, Supplying a View.)

    The first thing that jumps out at me immediately is that there's already a strong naming convention of:

        classNameView       ("View" suffix)
        classNameViewModel  ("ViewModel" suffix)

    My question is: since the ResourceDictionary can be manipulated programmatically, I'm wondering if anyone has managed to Regex.Replace the whole thing away, so the lookup is automatic, and any new View/ViewModels get resolved by virtue of their naming convention?

    [Edit] What I'm imagining is a hook/interception into ResourceDictionary. ... I'm also considering a method at startup that uses reflection to pull out *View$ and *ViewModel$ class names to build the DataTemplate dictionary in code:

        //build list
        foreach ....
            String.Format("<DataTemplate DataType=\"{x:Type vm:{0} }\"><v:{1} /></DataTemplate>", ...)
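    A hedged sketch of the converter approach described above (names like UnderConstructionView are illustrative, not from the original project): an IValueConverter that maps a ViewModel instance to a View instance purely by type-name replacement, falling back to the "under construction" view.

        using System;
        using System.Globalization;
        using System.Windows;
        using System.Windows.Data;

        public class ViewModelToViewConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                if (value == null)
                    return null;

                // FooViewModel -> FooView, by pure naming convention.
                Type vmType = value.GetType();
                string viewName = vmType.FullName.Replace("ViewModel", "View");
                Type viewType = vmType.Assembly.GetType(viewName);

                if (viewType == null)
                    return new UnderConstructionView(); // hypothetical fallback view

                var view = (FrameworkElement)Activator.CreateInstance(viewType);
                view.DataContext = value; // wire the View to its ViewModel
                return view;
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }

    A ContentControl can then bind its Content through this converter instead of relying on per-type DataTemplate entries in a ResourceDictionary.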

    Read the article

  • compiling numpy with sunperf atlas libraries

    - by user288558
    I would like to use the sunperf libraries when compiling scipy and numpy. I tried using setupscons.py, which seems to check for SUNPERF libraries, but it didn't recognize where mine are. Here is a listing of /pkg/linux/SS12/sunstudio12.1 (that's where the sunperf library lives):

        wkerzend@mosura:/home/wkerzend>ls /pkg/linux/SS12/sunstudio12.1/lib/
        CCios/                  libdbx_agent.so@        libsunperf.so.3@
        amd64/                  libfcollector.so@       libtha.so@
        collector.jar@          libfsu.so@              libtha.so.1@
        dbxrc@                  libfsu.so.1@            locale/
        debugging.so@           libfui.so@              make.rules@
        er.rc@                  libfui.so.1@            rw7/
        libblacs_openmpi.so@    librtc.so@              sse2/
        libblacs_openmpi.so.1@  libscalapack.so@        stlport4/
        libcollectorAPI.so@     libscalapack.so.1@      svr4.make.rules@
        libcollectorAPI.so.1@   libsunperf.so@          tools_svc_mgr@

    I tried to specify this directory in site.cfg, but I still get the following errors:

        Checking if g77 needs dummy main - MAIN__.
        Checking g77 name mangling - '_', '', lower-case.
        Checking g77 C compatibility runtime ...
            -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6
            -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6
            -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64
            -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../..
            -L/lib/../lib64 -L/usr/lib/../lib64 -lfrtbegin -lg2c -lm
        Checking MKL ... Failed (could not check header(s) : check config.log in
            build/scons/scipy/integrate for more details)
        Checking ATLAS ... Failed (could not check header(s) : check config.log in
            build/scons/scipy/integrate for more details)
        Checking SUNPERF ... Failed (could not check symbol cblas_sgemm : check
            config.log in build/scons/scipy/integrate for more details)
        Checking Generic BLAS ... yes
        Checking for BLAS (Generic BLAS) ... Failed: BLAS (Generic BLAS) test could
            not be linked and run
        Exception: Could not find F77 BLAS, needed for integrate package:
          File "/priv/manana1/wkerzend/install_dir/scipy-0.7.1/scipy/integrate/SConstruct", line 2:
            GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript')
          File "/home/wkerzend/python_coala/numscons-0.10.1-py2.6.egg/numscons/core/numpyenv.py", line 108:
            build_dir = '$build_dir', src_dir = '$src_dir')
          File "/priv/manana1/wkerzend/python_coala/numscons-0.10.1-py2.6.egg/numscons/scons-local/scons-local-1.2.0/SCons/Script/SConscript.py", line 549:
            return apply(_SConscript, [self.fs,] + files, subst_kw)
          File "/priv/manana1/wkerzend/python_coala/numscons-0.10.1-py2.6.egg/numscons/scons-local/scons-local-1.2.0/SCons/Script/SConscript.py", line 259:
            exec _file_ in call_stack[-1].globals
          File "/priv/manana1/wkerzend/install_dir/scipy-0.7.1/build/scons/scipy/integrate/SConscript", line 15:
            raise Exception("Could not find F77 BLAS, needed for integrate package")
        error: Error while executing scons command. See above for more information.
        If you think it is a problem in numscons, you can also try executing the scons
        command with --log-level option for more detailed output of what numscons is
        doing, for example --log-level=0; the lowest the level is, the more detailed
        the output it.

    Any help is appreciated.
    Wolfgang
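    For reference, a hedged sketch of a site.cfg that points the build at the listing above (the section name and keys follow numpy's site.cfg conventions, but the exact spelling recognized for the Sun Performance Library varies between numpy/numscons versions, so treat this as a starting point, not a recipe):

        [DEFAULT]
        library_dirs = /pkg/linux/SS12/sunstudio12.1/lib
        include_dirs = /pkg/linux/SS12/sunstudio12.1/include

        [blas]
        libraries = sunperf
        library_dirs = /pkg/linux/SS12/sunstudio12.1/lib

    The "could not check symbol cblas_sgemm" failure also hints that the checker wants the C BLAS interface; confirming with nm that libsunperf actually exports cblas_sgemm would be worth doing before fighting the configuration further.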

    Read the article

  • How to merge an improperly created "branch" that isn't really a branch (wasn't created by an svn copy)?

    - by MatrixFrog
    I'm working on a team with lots of people who are pretty unfamiliar with the concepts of version control systems, and are just kind of doing whatever seems to work, by trial and error. Someone created a "branch" from the trunk that is not ancestrally related to the trunk. My guess is it went something like this:

    1. They created a folder in branches.
    2. They checked out all the code from the trunk to somewhere on their desktop.
    3. They added all that code to the newly created folder as though it was a bunch of brand new files.

    So the repository isn't aware that all that code is actually just a copy of the trunk. When I look at the history of that branch in TortoiseSVN, and uncheck the "Stop on copy/rename" box, there is no revision that has the trunk (or any other path) under the "Copy from path" column.

    Then they made lots of changes on their "branch". Meanwhile, others were making lots of changes on the trunk. We tried to do a merge and of course it doesn't work, because the trunk and the fake branch are not ancestrally related. I can see only two ways to resolve this:

    1. Go through the logs on the "branch", look at every change that was made, and manually apply each change to the trunk.
    2. Go through the logs on the trunk, look at every change that was made between revision 540 (when the "branch" was created) and HEAD, and manually apply each change to the "branch".

    This involves 7 revisions one way or 11 revisions the other way, so neither one is really that terrible. But is there any way to cause the repository to "realize" that the branch really IS ancestrally related, even though it was created incorrectly, so that we can take advantage of the built-in merging functionality in Eclipse/TortoiseSVN? (A third possibility is sketched below.)

    (You may be wondering: why did your company hire these people and allow them to access the SVN repository without making sure they knew how to use it properly first?! We didn't -- this is a school assignment, which is a collaboration between two different classes -- the ones in the lower class were given a very quick hand-wavey "overview" of SVN which didn't really teach them anything. I've asked everyone in the group to please PLEASE read the svn book, and I'll make sure we (the slightly more experienced half of the team) keep a close eye on the repository to ensure this doesn't happen again.)
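    A hedged sketch of that third option (the r540 branch point comes from the description above; the paths are illustrative): recreate the branch properly with svn copy so ancestry exists, then port the fake branch's accumulated changes onto it as one patch, after which normal merging works.

        # 1. Make a real branch, ancestrally related to the trunk as of r540.
        svn copy ^/trunk@540 ^/branches/real-branch -m "Recreate branch with ancestry"

        # 2. Capture everything the fake branch changed relative to that point...
        svn diff ^/trunk@540 ^/branches/fake-branch > fake-branch.patch

        # 3. ...and apply it to a working copy of the real branch.
        svn checkout ^/branches/real-branch wc
        cd wc
        patch -p0 < fake-branch.patch
        svn commit -m "Port changes from the improperly created branch"

        # From here on, merges between trunk and real-branch record ancestry.

    One caveat: files added on the fake branch would still need an explicit svn add in the working copy before the commit, since patch doesn't schedule them.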

    Read the article

  • Consuming SharePoint Web Services fails when behind Proxy server

    - by Jan Petersen
    Hi All,

    I've seen a number of posts about consuming Web Services from behind a proxy server, but none that seems to address this problem. I'm building a desktop application, using Java and JAX-WS in NetBeans. I have a working prototype that can query the server for its authentication mode, successfully authenticate, and retrieve a list of web sites.

    However, if I run the same app from a network that is behind a proxy server (the proxy does not require authentication), then I'm running into trouble. The normal -Dhttp.proxyHost ... settings do not seem to help any. But I have found that by creating a ProxySelector class and setting it as default, I can regain access to the authentication web service; however, I still can't retrieve the list of web sites from the SharePoint server. Anyone have any experience on how to make this work?

    I have put up the source text and Java class files of a demo app showing the issue (it's a bit too long, even in the short demo form, to post here). When running the code from a network behind a proxy server, I successfully retrieve the authentication mode from the server, but the request for the web site list generates an exception originating at:

        com.sun.xml.internal.ws.transport.http.client.HttpClientTransport
            .readResponseCodeAndMessage(HttpClientTransport.java:201)

    The output from the source when no proxy is on the network is listed below:

        Successfully retrieved the SharePoint WebService response for Authentication
        SharePoint authentication method is: WINDOWS
        Calling Web Service to retrieve list of web site.
        Web Service call response:
        -------------- XML START --------------
        <Webs xmlns="http://schemas.microsoft.com/sharepoint/soap/">
          <Web Title="Collaboration Lab" Url="http://host.domain.com/collaboration"/>
          <Web Title="Global Data Lists" Url="http://host.domain.com/global_data_lists"/>
          <Web Title="Landing" Url="http://host.domain.com/Landing"/>
          <Web Title="SharePoint HelpDesk" Url="http://host.domain.com/helpdesk"/>
          <Web Title="Program Management" Url="http://host.domain.com/programmanagement"/>
          <Web Title="Project Site" Url="http://host.domain.com/Project Site"/>
          <Web Title="SharePoint Administration Tools" Url="http://host.domain.com/admin"/>
          <Web Title="Space Management Project" Url="http://host.domain.com/spacemgmt"/>
        </Webs>
        -------------- XML END --------------

    Br Jan
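    For reference, a hedged sketch of the ProxySelector workaround mentioned above (host and port are placeholders): installing a default selector routes the JDK's HTTP connections, including JAX-WS calls, through the proxy even where the http.proxyHost system properties are not picked up.

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.net.Proxy;
        import java.net.ProxySelector;
        import java.net.SocketAddress;
        import java.net.URI;
        import java.util.Collections;
        import java.util.List;

        public class SimpleProxySelector extends ProxySelector {
            private final Proxy proxy = new Proxy(Proxy.Type.HTTP,
                    new InetSocketAddress("proxy.example.com", 8080)); // placeholder

            @Override
            public List<Proxy> select(URI uri) {
                // Send all traffic through the proxy.
                return Collections.singletonList(proxy);
            }

            @Override
            public void connectFailed(URI uri, SocketAddress sa, IOException ioe) {
                // Called when the proxy is unreachable; nothing to do in a sketch.
            }

            public static void main(String[] args) {
                ProxySelector.setDefault(new SimpleProxySelector());
                // ... create the JAX-WS service and port, then call it as before ...
            }
        }

    If the authentication call succeeds but the Webs call still fails, comparing the two endpoints' URLs (host name vs. IP address, HTTP vs. HTTPS) against the proxy's rules would be a sensible next check.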

    Read the article

  • Can't create file in Ada 95

    - by duder
    Hello, I'm trying to follow a standard reference for opening files, but running into a constraint_error at the line where I call Ada.Text_IO.Create(). It says "range check failed". Any help appreciated; here's the code:

        WITH Ada.Text_IO;
        WITH Ada.Integer_Text_IO;
        USE Ada.Text_IO;
        USE Ada.Integer_Text_IO;

        PROCEDURE FileManip IS

           --Variables
           Start_Int  : Integer;
           Stop_Int   : Integer;
           Max_Length : Integer;

           --Output File
           MaxName : CONSTANT Positive := 80;
           SUBTYPE NameRange IS Positive RANGE 1..MaxName;
           OutFileName   : String(NameRange) := (OTHERS => '#');
           OutNameLength : NameRange;
           OutData       : File_Type;

           --Array
           TYPE Chain_Array IS ARRAY(1..500) OF Integer;
           Sum : Integer := 1;

        BEGIN

           --Inputs
           Ada.Text_IO.Put(Item => "Enter a starting Integer: ");
           Ada.Integer_Text_IO.Get(Item => Start_Int);
           Ada.Text_IO.New_Line;

           Ada.Text_IO.Put(Item => "Enter a stopping Integer: ");
           Ada.Integer_Text_IO.Get(Item => Stop_Int);
           Ada.Text_IO.New_Line;

           Ada.Text_IO.Put(Item => "Enter a Maximum Length to search: ");
           Ada.Integer_Text_IO.Get(Item => Max_Length);
           Ada.Text_IO.New_Line;

           Ada.Text_IO.Put(Item => "Enter a output file name > ");
           Ada.Text_IO.Get_Line(Item => OutFileName,
                                Last => OutNameLength);

           Ada.Text_IO.Create(File => OutData,
                              Mode => Ada.Text_IO.Out_File,
                              Name => OutFileName(1..OutNameLength));
           Ada.Text_IO.New_Line;
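    A hedged guess at the cause, with a sketch of the fix: Ada.Integer_Text_IO.Get leaves the end-of-line in the input buffer, so the following Get_Line can return immediately with Last = 0, and 0 is outside the NameRange subtype (1..80), which would produce exactly this range check failure. Two small changes, assuming that diagnosis is right:

        --  Allow the length to be 0 for an empty line:
        OutNameLength : Natural RANGE 0..MaxName;

        ...

        Ada.Integer_Text_IO.Get(Item => Max_Length);
        Ada.Text_IO.Skip_Line;   --  consume the rest of the input line
        Ada.Text_IO.New_Line;

        Ada.Text_IO.Put(Item => "Enter a output file name > ");
        Ada.Text_IO.Get_Line(Item => OutFileName,
                             Last => OutNameLength);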

    Read the article

  • Connect to running web role on Azure using Remote Desktop Connection and VS2012

    - by Magnus Karlsson
    We want to be able to collect IntelliTrace information from our running app and also use remote desktop to connect to the IIS and look around (probably debugging).

    1. Create certificate

    1.1 Right-click the cloud project and select "Configure remote desktop".
    1.2 In the drop-down list of certificates, choose <create> at the bottom.
    1.3 Follow the instructions; you can set it up with default values.
    1.4 When done, choose the certificate and click "Copy to File...".
    1.5 Save the file with any name you want. Now we will save it to local storage to be able to import it to our solution through the Azure configuration manager in step 3.

    2. Save certificate to local storage

    Now we need to attach it to our local certificate storage to be able to reach it from our configuration manager in Visual Studio. Microsoft provides the following steps for doing this: http://support.microsoft.com/kb/232137

    In order to view the Certificates store on the local computer, perform the following steps:

    1. Click Start, and then click Run.
    2. Type "MMC.EXE" (without the quotation marks) and click OK.
    3. Click Console in the new MMC you created, and then click Add/Remove Snap-in.
    4. In the new window, click Add.
    5. Highlight the Certificates snap-in, and then click Add.
    6. Choose the Computer option and click Next.
    7. Select Local Computer on the next screen, and then click OK.
    8. Click Close, and then click OK.

    You have now added the Certificates snap-in, which will allow you to work with any certificates in your computer's certificate store. You may want to save this MMC for later use.

    Now that you have access to the Certificates snap-in, you can import the server certificate into your computer's certificate store by following these steps:

    1. Open the Certificates (Local Computer) snap-in and navigate to Personal, and then Certificates. Note: Certificates may not be listed. If it is not, that is because there are no certificates installed.
    2. Right-click Certificates (or Personal if that option does not exist).
    3. Choose All Tasks, and then click Import.
    4. When the wizard starts, click Next.
    5. Browse to the PFX file you created containing your server certificate and private key. Click Next.
    6. Enter the password you gave the PFX file when you created it. Be sure the Mark the key as exportable option is selected if you want to be able to export the key pair again from this computer. As an added security measure, you may want to leave this option unchecked to ensure that no one can make a backup of your private key.
    7. Click Next, and then choose the Certificate Store you want to save the certificate to. You should select Personal because it is a Web server certificate. If you included the certificates in the certification hierarchy, it will also be added to this store.
    8. Click Next. You should see a summary screen showing what the wizard is about to do. If this information is correct, click Finish.

    You will now see the server certificate for your Web server in the list of Personal Certificates. It will be denoted by the common name of the server (found in the subject section of the certificate).

    Now that you have the certificate backup imported into the certificate store, you can enable Internet Information Services 5.0 to use that certificate (and the corresponding private key). To do this, perform the following steps:

    1. Open the Internet Services Manager (under Administrative Tools) and navigate to the Web site you want to enable secure communications (SSL/TLS) on.
    2. Right-click on the site and click Properties. You should now see the properties screen for the Web site.
    3. Click the Directory Security tab.
    4. Under the Secure Communications section, click Server Certificate. This will start the Web Site Certificate Wizard. Click Next.
    5. Choose the Assign an existing certificate option and click Next.
    6. You will now see a screen showing the contents of your computer's personal certificate store. Highlight your Web server certificate (denoted by the common name), and then click Next.
    7. You will now see a summary screen showing you all the details about the certificate you are installing. Be sure that this information is correct, or you may have problems using SSL or TLS in HTTP communications.
    8. Click Next, and then click OK to exit the wizard.

    You should now have an SSL/TLS-enabled Web server. Be sure to protect your PFX files from any unwanted personnel.

    (Image of a typical MMC.EXE with the certificates up.)

    3. Import the certificate to your Visual Studio project

    3.1 Now right-click your equivalent to the MvcWebRole1 project and choose Properties.
    3.2 Choose Certificates. Click the ellipsis to the right of the "Thumbprint" field and you should be able to select your newly created certificate here. After selecting it, save the file.

    4. Upload the certificate to your Azure subscription

    4.1 Go to the Azure management portal, click the services menu icon to the left and choose the service. Click Upload in the bottom menu.

    5. Connect to server

    Since I tried to use account settings (you have to use another name), we have to set up a new name for the connection. No biggie.

    5.1 Go to the Azure management portal, select your service and, in the bottom menu, choose "REMOTE". This will display the configuration for remote connection. It will actually change your ServiceConfiguration.cscfg file. After you change it here, it might be good to choose download and replace the one in your project. Set a name that is not your Windows Azure account name and not Administrator.

    5.2 Go to Visual Studio, click Server Explorer, choose your cloud service instance (as shown in the original screenshot) and click "Connect using remote desktop". You will now be able to log in with the name and password set up in step 5.1, and voila! Windows Server 2012, IIS and other nice stuff!

    To do this I've been using http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx, where you can collect some of this information and more.
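    For reference, a hedged sketch of what the remote-desktop configuration adds to ServiceConfiguration.cscfg (the setting names come from the Azure RemoteAccess/RemoteForwarder plugins; the role name and values are placeholders):

        <Role name="MvcWebRole1">
          <ConfigurationSettings>
            <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
            <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="myRemoteUser" />
            <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="...base64 blob..." />
            <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2013-12-31T23:59:59.0000000+01:00" />
            <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
          </ConfigurationSettings>
        </Role>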

    Read the article

  • Maven webapp with maven-eclipse-plugin doesn't generate <dependent-module>

    - by chrsk
    I use the eclipse:eclipse goal to generate an Eclipse project environment. The deployment works fine; the goal creates the var classpath entries for all needed dependencies. With m2eclipse there was the Maven container, which defines an export folder (WEB-INF/lib in my case). But I don't want to rely on m2eclipse, so I don't use it anymore.

    The classpath entries generated by the eclipse:eclipse goal don't have such an export folder. While booting the servlet container with WTP, it publishes all resources and classes except the libraries to the context. What's missing to publish the needed libs, or isn't that possible without the m2eclipse integration?

    Environment:

    Eclipse 3.5 JEE Galileo
    Apache Maven 2.2.1 (r801777; 2009-08-06 21:16:01+0200)
    Java version: 1.6.0_14
    m2eclipse

    The maven-eclipse-plugin configuration:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-eclipse-plugin</artifactId>
          <version>2.8</version>
          <configuration>
            <projectNameTemplate>someproject-[artifactId]</projectNameTemplate>
            <useProjectReferences>false</useProjectReferences>
            <downloadSources>false</downloadSources>
            <downloadJavadocs>false</downloadJavadocs>
            <wtpmanifest>true</wtpmanifest>
            <wtpversion>2.0</wtpversion>
            <wtpapplicationxml>true</wtpapplicationxml>
            <wtpContextName>someproject-[artifactId]</wtpContextName>
            <additionalProjectFacets>
              <jst.web>2.3</jst.web>
            </additionalProjectFacets>
          </configuration>
        </plugin>

    The generated files:

    After executing the eclipse:eclipse goal, the dependent-module is not listed in my generated .settings/org.eclipse.wst.common.component, so on server boot I miss the dependencies. This is what I get:

        <?xml version="1.0" encoding="UTF-8"?>
        <project-modules id="moduleCoreId" project-version="1.5.0">
          <wb-module deploy-name="someproject-core">
            <wb-resource deploy-path="/" source-path="src/main/java"/>
            <wb-resource deploy-path="/" source-path="src/main/webapp"/>
            <wb-resource deploy-path="/" source-path="src/main/resources"/>
          </wb-module>
        </project-modules>
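    For comparison, a hedged sketch of the shape the component file needs for libraries to deploy: one dependent-module entry per jar under /WEB-INF/lib (the handle below uses the M2_REPO classpath-variable form; the artifact is illustrative):

        <wb-module deploy-name="someproject-core">
          <wb-resource deploy-path="/" source-path="src/main/java"/>
          ...
          <dependent-module deploy-path="/WEB-INF/lib"
              handle="module:/classpath/var/M2_REPO/commons-lang/commons-lang/2.4/commons-lang-2.4.jar">
            <dependency-type>uses</dependency-type>
          </dependent-module>
        </wb-module>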

    Read the article

  • JavaScript Image zoom with CSS3 Transforms, How to calculate Origin? (with example)

    - by Sunday Ironfoot
    I'm trying to implement an image zoom effect, a bit like how the zoom works with Google Maps, but with a grid of fixed-position images. I've uploaded an example of what I have so far here: http://www.dominicpettifer.co.uk/Files/MosaicZoom.html (uses CSS3 transforms, so it only works with Firefox, Opera, Chrome or Safari). Use your mouse wheel to zoom in/out.

    The HTML source is basically an outer div with an inner div, and that inner div contains 16 images arranged using absolute positioning. It's going to be a photo mosaic, basically. I've got the zoom bit working using CSS3 transforms:

        $(this).find('div').css('-moz-transform', 'scale(' + scale + ')');

    ...however, I'm relying on the mouse X/Y position on the outer div to zoom in on where the mouse cursor is, similar to how Google Maps functions. The problem is that if you zoom right in on an image, move the cursor to the bottom/left corner and zoom again, instead of zooming to the bottom/left corner of the image, it zooms to the bottom/left of the entire mosaic. This has the effect of appearing to jump about the mosaic as you zoom in closer while moving the mouse around, even slightly.

    That's basically the problem: I want the zoom to work exactly like Google Maps, where it zooms exactly to where your mouse cursor position is, but I can't get my head around the maths to calculate the transform-origin X/Y values correctly. Please help; I've been stuck on this for 3 days now. Here is the full code listing for the mouse wheel event:

        var scale = 1;
        $("#mosaicContainer").mousewheel(function(e, delta) {
            if (delta > 0) {
                scale += 1;
            } else {
                scale -= 1;
            }
            scale = scale < 1 ? 1 : (scale > 40 ? 40 : scale);

            var x = e.pageX - $(this).offset().left;
            var y = e.pageY - $(this).offset().top;

            $(this).find('div').css('-moz-transform', 'scale(' + scale + ')')
                               .css('-moz-transform-origin', x + 'px ' + y + 'px');
            return false;
        });
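    A hedged sketch of one way to make the math work out (not necessarily the only one): keep transform-origin pinned at 0 0, use an explicit translate + scale pair, and solve for the translation that keeps the content point under the cursor stationary. With screen = t + s * p (p being a point in untransformed content coordinates), requiring the cursor position m to show the same content point before and after the zoom gives tNew = m - (sNew / sOld) * (m - tOld):

        var scale = 1, tx = 0, ty = 0;

        $("#mosaicContainer").mousewheel(function(e, delta) {
            var newScale = Math.min(40, Math.max(1, scale + (delta > 0 ? 1 : -1)));

            var mx = e.pageX - $(this).offset().left;
            var my = e.pageY - $(this).offset().top;

            // Keep the content point under the cursor fixed:
            //   m = tNew + (sNew / sOld) * (m - tOld)  =>  solve for tNew.
            tx = mx - (newScale / scale) * (mx - tx);
            ty = my - (newScale / scale) * (my - ty);
            scale = newScale;

            $(this).find('div')
                .css('-moz-transform',
                     'translate(' + tx + 'px, ' + ty + 'px) scale(' + scale + ')')
                .css('-moz-transform-origin', '0 0');
            return false;
        });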

    Read the article

  • PulpCore music playback - loop sound and animate volume

    - by Peter Perhác
    I have been experimenting with PulpCore, trying to create my own tower defence game (not playable yet), and I am enjoying it very much. I ran into a problem that I can't quite figure out. I extended PulpCore with the JOrbis thing to allow OGG files to be played. Works fine. However, PulpCore seems to have a problem with looping the sound WHILE animating the volume level. I tried this with a wav file too, to make sure it isn't JOrbis that breaks it. The code is like this:

        Sound bgMusic = Sound.load("music/music.ogg");
        Playback musicPlayback;
        ...
        musicVolume = new Fixed(0.75);
        musicPlayback = bgMusic.loop(musicVolume);
        //TODO figure out why it's NOT looping when volume is animated
        // musicVolume.animate(0, musicVolume.get(), FADE_IN_TIME);

    This code, for as long as the last line is commented out, plays music.ogg again and again in an endless loop (which I can stop by calling stop on the Playback object returned from loop()). However, I would like the music to fade in smoothly, so following the advice of the PulpCore API docs, I added the last line, which will create the fade-in, but the music will only play once and then stop. I wonder why that is? Here is a bit of the documentation:

        Playback pulpcore.sound.Sound.loop(Fixed level)

        Loops this sound clip with the specified volume level (0.0 to 1.0).
        The level may have a property animation attached.

        Parameters: level
        Returns: a Playback object for this unique sound playback (one Sound can
        have many simultaneous Playback objects) or null if the sound could not
        be played.

    So what could be the problem? I repeat: with the last line, the sound fades in but doesn't loop; without it, it loops but starts at the specified 0.75 volume level. Why can't I animate the volume of the looped music playback? What am I doing wrong? Does anyone have any experience with PulpCore and has come across this problem? Could anyone please download PulpCore and try to loop music which fades in (or out)?

    Note: I need to keep a reference to the Playback object returned so I can kill the music later.

    Read the article

  • The challenge of communicating externally with IRM secured content

    - by Simon Thorpe
    I am often asked by customers about how they handle sending IRM-secured documents to external parties. Their concern is that using IRM to secure sensitive information they need to share outside their business is troubled by the inability of third parties to install the software which enables them to gain access to the information. It is a very legitimate question and one I've had to answer many times in the past 10 years whilst helping customers plan successful IRM deployments.

    The operating system does not provide the required level of content security

    The problem arises from what IRM delivers: persistent security for your sensitive information wherever it resides and whenever it is in use. Oracle IRM gives customers an array of features that help ensure sensitive information in an IRM document or email is always protected and only accessed by authorized users using legitimate applications. Examples of such functionality are:

    - Control of the clipboard, either by disabling it completely in the opened document or by allowing the cut and pasting of information between secured IRM documents but not into insecure applications.
    - Protection against programmatic access to the document. Office documents and PDF documents have the ability to be accessed by other applications and scripts. With Oracle IRM we have to protect against this to ensure content cannot be leaked by someone writing a simple program.
    - Securing of decrypted content in memory. At some point during the process of opening and presenting a sealed document to an end user, we must decrypt it and give it to the application (Adobe Reader, Microsoft Word, Excel etc). This process must be secure so that someone cannot simply get access to the decrypted information.

    The operating system alone just doesn't have the functionality to deliver these types of features. This is why for every IRM technology there must be some extra software installed, and typically this software requires administrative rights to install. The fact is that if you want to have very strong security and access control over a document you are going to send to someone beyond your network infrastructure, there must be some software to provide that functionality.

    Simple installation with Oracle IRM

    The software used to control access to Oracle IRM sealed content is called the Oracle IRM Desktop. It is a small, free piece of software, roughly 12MB in size. This software delivers functionality for everything a user needs to work with an Oracle IRM solution. It provides the functionality for all formats we support, the storage and transparent synchronization of user rights and, unique to Oracle, the ability to search inside sealed files stored on the local computer.

    In Oracle we've made every technical effort to ensure that installing this software is as simple as possible. In situations where the user's computer is part of the enterprise, this software is typically deployed using existing technologies such as Systems Management Server from Microsoft or by using Active Directory Group Policies. However, when sending sealed content externally, you cannot automatically install software on the end user's machine. You need to rely on them to download and install it themselves. Again we've made every effort for this manual install process to be as simple as we can, starting with the small download size of the software itself through to the simple installation process; most end users are able to install and access sealed content very quickly. You can see for yourself how easily this is done by walking through our free and easy self-service demonstration of using sealed content.

    How to handle objections and ensure there is value

    However, the fact still remains that end users may object to installing, or may simply be unable to install, the software themselves due to lack of permissions. This is often a problem with any technology that requires specialized software to access a new type of document.

    In Oracle, over the past 10 years, we've learned many ways to get over this barrier of getting software deployed by external users. First, and I would say of most importance, the content MUST have some value to the person you are asking to install software. Without some type of value proposition you are going to find it very difficult to get past objections to installing the IRM Desktop. Imagine if you were going to secure the weekly campus restaurant menu and send this to contractors. Their initial response will be, "why on earth are you asking me to download some software just to access your menu!?". A valid objection... there is no value to the user in doing this.

    Now consider the scenario where you are sending one of your contractors their employment contract, which contains their address, social security number and bank account details. Are they likely to take 5 minutes to install the IRM Desktop? You bet they are, because there is real value in doing so and they understand why you are doing it. They want their personal information to be securely handled, and a quick download and install of some software is a small task in comparison to dealing with the loss of this information.

    Be clear in communicating this value

    So when sending sealed content to people externally, you must be clear in communicating why you are using an IRM technology and why they need to install some software to access the content. Do not try and avoid the issue; you must be clear and upfront about it. In doing so you will significantly reduce the "I didn't know I needed to do this..." responses and also gain respect for being straightforward.

    One customer I worked with, 6 months after the initial deployment of Oracle IRM, called me panicking that a partner they had started to share their engineering documents with refused to install any software to access this highly confidential intellectual property. I explained they had to communicate to the partner why they were doing this. I told them to go back with the statement that "the company takes protecting its intellectual property seriously and has decided to use IRM to control access to engineering documents", and if the partner didn't respect this decision, they would find another company that would. The result? A few days later the partner had made the Oracle IRM Desktop part of the approved list of software in their company.

    Companies are successful when sending sealed content to third parties

    We have many, many customers who send sensitive content to third parties. Some customers actually sell access to Oracle IRM protected content, and therefore 99% of their users are external to their business; one in particular has sold content to hundreds of thousands of external users. Oracle themselves use the technology to secure M&A documents, payroll data and security assessments which go beyond the traditional enterprise security perimeter. Pretty much every company who deploys Oracle IRM will at some point be sending those documents to people outside of the company; these customers must be successful, otherwise Oracle IRM wouldn't be successful. Because our software is used by a wide variety of companies, some of whom use it to sell content, I've often run into people I'm sharing a sealed document with who already have the IRM Desktop installed due to accessing content from another company.

    The future

    In summary, I would say that yes, this is a hurdle that many customers are concerned about, but we see much evidence that in practice people leap that hurdle with relative ease, as long as they are good at communicating the value of using IRM and also take measures to ensure end users can easily go through the process of installation. We are constantly developing new ideas to reduce this hurdle, and maybe one day the operating systems will give us enough rich security functionality to need no software installation. Until then, Oracle IRM is by far the easiest solution to balance security and usability for your business. If you would like to evaluate it for yourselves, please contact us.

    Read the article

  • Using target-specific variable in makefile

    - by James Johnston
    I have the following makefile:

        OUTPUTDIR = build

        all: v12target v13target

        v12target: INTDIR = v12
        v12target: DoV12.avrcommontargets

        v13target: INTDIR = v13
        v13target: DoV13.avrcommontargets

        %.avrcommontargets: $(OUTPUTDIR)/%.elf
            @true

        $(OUTPUTDIR)/%.elf: $(OUTPUTDIR)/$(INTDIR)/main.o
            @echo TODO build ELF file from object file: destination $@, source $^
            @echo Compiled elf file for $(INTDIR) > $@

        $(OUTPUTDIR)/$(INTDIR)/%.o: %.c
            @echo TODO call GCC to compile C file: destination $@, source $<
            @echo Compiled object file for $<, revision $(INTDIR) > $@

        $(shell rm -rf $(OUTPUTDIR))
        $(shell mkdir -p $(OUTPUTDIR)/v12 2> /dev/null)
        $(shell mkdir -p $(OUTPUTDIR)/v13 2> /dev/null)

        .SECONDARY:

    The idea is that there are several different code configurations that need to be compiled from the same source code. The "all" target depends on v12target and v13target, which set a number of variables for that particular build. It also depends on an "avrcommontargets" pattern, which defines how to actually do the compiling. avrcommontargets then depends on the ELF file, which in turn depends on object files, which are built from the C source code.

    Each compiled C file results in an object file (*.o). Since each configuration (v12, v13, etc.) results in a different output, the C file needs to be built several times with the output placed in different subdirectories. For example, "build/v12/main.o", "build/v13/main.o", etc.

    Sample output:

        TODO call GCC to compile C file: destination build//main.o, source main.c
        TODO build ELF file from object file: destination build/DoV12.elf, source build//main.o
        TODO build ELF file from object file: destination build/DoV13.elf, source build//main.o

    The problem is that the object file isn't going into the correct subdirectory. For example, "build//main.o" instead of "build/v12/main.o". That then prevents main.o from being correctly rebuilt to generate the v13 version of main.o. I'm guessing the issue is that $(INTDIR) is a target-specific variable, and perhaps this can't be used in the pattern targets I defined for %.elf and %.o. The correct output would be:

        TODO call GCC to compile C file: destination build/v12/main.o, source main.c
        TODO build ELF file from object file: destination build/DoV12.elf, source build/v12/main.o
        TODO call GCC to compile C file: destination build/v13/main.o, source main.c
        TODO build ELF file from object file: destination build/DoV13.elf, source build/v13/main.o

    What do I need to do to adjust this makefile so that it generates the correct output?
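    A hedged sketch of one standard fix: the guess above is on the right track, in that target-specific variables only take effect inside recipes, not when make expands prerequisite lists, so $(OUTPUTDIR)/$(INTDIR)/main.o expands with an empty INTDIR at rule-definition time. Letting the pattern stem carry the variant directory sidesteps target-specific variables entirely (target names are adjusted so the stem can match, and recipe lines must start with tabs):

        OUTPUTDIR = build

        all: $(OUTPUTDIR)/v12/DoAvr.elf $(OUTPUTDIR)/v13/DoAvr.elf

        # The stem (%) is the variant, e.g. v12 or v13; $* names it in recipes.
        $(OUTPUTDIR)/%/DoAvr.elf: $(OUTPUTDIR)/%/main.o
            @echo TODO build ELF file from object file: destination $@, source $^
            @echo Compiled elf file for $* > $@

        $(OUTPUTDIR)/%/main.o: main.c
            @mkdir -p $(dir $@)
            @echo TODO call GCC to compile C file: destination $@, source $<
            @echo Compiled object file for $<, revision $* > $@

        .SECONDARY:

    GNU make's .SECONDEXPANSION feature is the other common route when a prerequisite genuinely needs a target-specific variable.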

    Read the article

  • Using cascade in NHibernate

    - by Tyler
    I have two classes, call them Monkey and Banana, with a one-to-many bidirectional relationship.

        Monkey monkey = new Monkey();
        Banana banana = new Banana();
        monkey.Bananas.Add(banana);
        banana.Monkey = monkey;
        hibernateService.Save(banana);

    When I run that chunk of code, I want both monkey and banana to be persisted. However, it's only persisting both when I explicitly save the monkey, and not vice versa. Initially, this made sense, since only my Monkey.hbm.xml had a mapping with cascade="all":

        <set name="Bananas" inverse="true" cascade="all">
          <key column="Id"/>
          <one-to-many class="Banana"/>
        </set>

    I figured I just needed to add the following to my Banana.hbm.xml file:

        <many-to-one name="Monkey" column="Id" cascade="all" />

    Unfortunately, this resulted in a "Parameter index is out of range" error when I tried to run the snippet of code. I investigated this error and found this post, but I still don't see what I'm doing wrong. I have the relationship mapped once on each side as far as I can tell. For full disclosure, here are the two mapping files:

    Monkey.hbm.xml

        <class name="Monkey" table="monkies" lazy="true">
          <id name="Id">
            <generator class="increment" />
          </id>
          <property name="Name" />
          <set name="Bananas" inverse="true" cascade="all">
            <key column="Id"/>
            <one-to-many class="Banana"/>
          </set>
        </class>

    Banana.hbm.xml

        <class name="Banana" table="bananas" lazy="true">
          <id name="Id">
            <generator class="increment" />
          </id>
          <property name="Name" />
          <many-to-one name="Monkey" column="Id" cascade="all" />
        </class>
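    A hedged guess at the root cause, with a sketch of corrected mappings: both the <key> and the <many-to-one> point at the bananas table's Id column, so the same column gets bound twice in the generated SQL (a classic source of "Parameter index is out of range") and the association has no real foreign-key column of its own. Mapping it through a dedicated FK column (monkey_id here, an invented name that would need to exist in the bananas table) is the usual shape:

        <!-- Monkey.hbm.xml -->
        <set name="Bananas" inverse="true" cascade="all">
          <key column="monkey_id"/>   <!-- FK column in the bananas table -->
          <one-to-many class="Banana"/>
        </set>

        <!-- Banana.hbm.xml -->
        <many-to-one name="Monkey" column="monkey_id" cascade="save-update" />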

    Read the article

  • TFS CM resource recommendations / some questions

    - by John
    I am working with a small development shop that consists of a group of 5 developers and 1 QA person. We are using TFS and need to get more sophisticated in how we use this tool.

    Currently the development team checks in their code each evening. A nightly build runs and pushes the output out on a network share. Our QA person uses this build for testing the next day. Sometimes the build off the trunk codebase has issues/bugs that hinder the QA process. It hasn't been a giant issue in the past, but we now want to get to a state where our QA person is testing on a stable QA build. So I believe we need to create a branch (call it QA): the developers will continue to develop off the trunk, but the QA person will use builds created from code in the QA branch. Seems simple enough, but we have started doing code reviews as well. So we have another requirement: only code that has been code reviewed can be promoted to the QA branch.

    Each developer works off a TFS item, and when they check in a changeset, they do it against a TFS item, which creates a link between a checked-in code file and a TFS item. Eventually the TFS item becomes complete and ready for code review. All code attached to the TFS item is reviewed. How can the versions of these files get promoted to the QA branch?

    In the QA branch, if a bug is found, we want to fix it in the QA branch and have the changes migrated back to the trunk. I believe TFS has a way to automatically do this, doesn't it?

    Long story short, we want to get to a build and CM environment that I believe is pretty standard, but we are unaware of how to make this happen with TFS. Given our situation above, can someone point out a book or website(s) that would address our specific needs? We would like to make this happen without having to get too deep into CM theory or TFS. I very much appreciate any and all suggestions!

    Thanks, John

    Read the article

  • Navigating Libgdx Menu with arrow keys or controller

    - by Phil Royer
    I'm attempting to make my menu navigable with the arrow keys or via the d-pad on a controller. So far I've had no luck. The question is: can someone walk me through how to make my current menu, or any libgdx menu, keyboard-accessible? I'm a bit noobish with some stuff and I come from a JavaScript background. Here's an example of what I'm trying to do: http://dl.dropboxusercontent.com/u/39448/webgl/qb/qb.html

    For a simple menu that you can just add a few buttons to and run out of the box, use this: http://www.sadafnoor.com/blog/how-to-create-simple-menu-in-libgdx/

    Or you can use my code, but I use a lot of custom styles. Here's an example of my code:

        import aurelienribon.tweenengine.Timeline;
        import aurelienribon.tweenengine.Tween;
        import aurelienribon.tweenengine.TweenManager;
        import com.badlogic.gdx.Game;
        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.Screen;
        import com.badlogic.gdx.graphics.GL20;
        import com.badlogic.gdx.graphics.Texture;
        import com.badlogic.gdx.graphics.g2d.Sprite;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;
        import com.badlogic.gdx.graphics.g2d.TextureAtlas;
        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.scenes.scene2d.Actor;
        import com.badlogic.gdx.scenes.scene2d.InputEvent;
        import com.badlogic.gdx.scenes.scene2d.InputListener;
        import com.badlogic.gdx.scenes.scene2d.Stage;
        import com.badlogic.gdx.scenes.scene2d.ui.Skin;
        import com.badlogic.gdx.scenes.scene2d.ui.Table;
        import com.badlogic.gdx.scenes.scene2d.ui.TextButton;
        import com.badlogic.gdx.scenes.scene2d.utils.Align;
        import com.badlogic.gdx.scenes.scene2d.utils.ClickListener;
        import com.project.game.tween.ActorAccessor;

        public class MainMenu implements Screen {

            private SpriteBatch batch;
            private Sprite menuBG;
            private Stage stage;
            private TextureAtlas atlas;
            private Skin skin;
            private Table table;
            private TweenManager tweenManager;

            @Override
            public void render(float delta) {
                Gdx.gl.glClearColor(0, 0, 0, 1);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

                batch.begin();
                menuBG.draw(batch);
                batch.end();

                //table.debug();
                stage.act(delta);
                stage.draw();
                //Table.drawDebug(stage);

                tweenManager.update(delta);
            }

            @Override
            public void resize(int width, int height) {
                menuBG.setSize(width, height);
                stage.setViewport(width, height, false);
                table.invalidateHierarchy();
            }

            @Override
            public void resume() {
            }

            @Override
            public void show() {
                stage = new Stage();
                Gdx.input.setInputProcessor(stage);

                batch = new SpriteBatch();
                atlas = new TextureAtlas("ui/atlas.pack");
                skin = new Skin(Gdx.files.internal("ui/menuSkin.json"), atlas);

                table = new Table(skin);
                table.setBounds(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

                // Set Background Texture
                Texture menuBackgroundTexture = new Texture("images/mainMenuBackground.png");
                menuBG = new Sprite(menuBackgroundTexture);
                menuBG.setSize(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

                // Create Main Menu Buttons

                // Button Play
                TextButton buttonPlay = new TextButton("START", skin, "inactive");
                buttonPlay.addListener(new ClickListener() {
                    @Override
                    public void clicked(InputEvent event, float x, float y) {
                        ((Game) Gdx.app.getApplicationListener()).setScreen(new LevelMenu());
                    }
                });
                buttonPlay.addListener(new InputListener() {
                    public boolean keyDown(InputEvent event, int keycode) {
                        System.out.println("down");
                        return true;
                    }
                });
                buttonPlay.padBottom(12);
                buttonPlay.padLeft(20);
                buttonPlay.getLabel().setAlignment(Align.left);

                // Button EXTRAS
                TextButton buttonExtras = new TextButton("EXTRAS", skin, "inactive");
                buttonExtras.addListener(new ClickListener() {
                    @Override
                    public void clicked(InputEvent event, float x, float y) {
                        ((Game) Gdx.app.getApplicationListener()).setScreen(new ExtrasMenu());
                    }
                });
                buttonExtras.padBottom(12);
                buttonExtras.padLeft(20);
                buttonExtras.getLabel().setAlignment(Align.left);

                // Button Credits
                TextButton buttonCredits = new TextButton("CREDITS", skin, "inactive");
                buttonCredits.addListener(new ClickListener() {
                    @Override
                    public void clicked(InputEvent event, float x, float y) {
                        ((Game) Gdx.app.getApplicationListener()).setScreen(new Credits());
                    }
                });
                buttonCredits.padBottom(12);
                buttonCredits.padLeft(20);
                buttonCredits.getLabel().setAlignment(Align.left);

                // Button Settings
                TextButton buttonSettings = new TextButton("SETTINGS", skin, "inactive");
                buttonSettings.addListener(new ClickListener() {
                    @Override
                    public void clicked(InputEvent event, float x, float y) {
                        ((Game) Gdx.app.getApplicationListener()).setScreen(new Settings());
                    }
                });
                buttonSettings.padBottom(12);
                buttonSettings.padLeft(20);
                buttonSettings.getLabel().setAlignment(Align.left);

                // Button Exit
                TextButton buttonExit = new TextButton("EXIT", skin, "inactive");
                buttonExit.addListener(new ClickListener() {
                    @Override
                    public void clicked(InputEvent event, float x, float y) {
                        Gdx.app.exit();
                    }
                });
                buttonExit.padBottom(12);
                buttonExit.padLeft(20);
                buttonExit.getLabel().setAlignment(Align.left);

                // Adding Heading-Buttons to the cue
                table.add().width(190);
                table.add().width((table.getWidth() / 10) * 3);
                table.add().width((table.getWidth() / 10) * 5).height(140).spaceBottom(50);
                table.add().width(190).row();

                table.add().width(190);
                table.add(buttonPlay).spaceBottom(20).width(460).height(110);
                table.add().row();

                table.add().width(190);
                table.add(buttonExtras).spaceBottom(20).width(460).height(110);
                table.add().row();

                table.add().width(190);
                table.add(buttonCredits).spaceBottom(20).width(460).height(110);
                table.add().row();

                table.add().width(190);
                table.add(buttonSettings).spaceBottom(20).width(460).height(110);
                table.add().row();

                table.add().width(190);
                table.add(buttonExit).width(460).height(110);
                table.add().row();

                stage.addActor(table);

                // Animation Settings
                tweenManager = new TweenManager();
                Tween.registerAccessor(Actor.class, new ActorAccessor());

                // Heading and Buttons Fade In
                Timeline.createSequence().beginSequence()
                    .push(Tween.set(buttonPlay, ActorAccessor.ALPHA).target(0))
                    .push(Tween.set(buttonExtras, ActorAccessor.ALPHA).target(0))
                    .push(Tween.set(buttonCredits, ActorAccessor.ALPHA).target(0))
                    .push(Tween.set(buttonSettings, ActorAccessor.ALPHA).target(0))
                    .push(Tween.set(buttonExit, ActorAccessor.ALPHA).target(0))
                    .push(Tween.to(buttonPlay, ActorAccessor.ALPHA, .5f).target(1))
                    .push(Tween.to(buttonExtras, ActorAccessor.ALPHA, .5f).target(1))
                    .push(Tween.to(buttonCredits, ActorAccessor.ALPHA, .5f).target(1))
                    .push(Tween.to(buttonSettings, ActorAccessor.ALPHA, .5f).target(1))
                    .push(Tween.to(buttonExit, ActorAccessor.ALPHA, .5f).target(1))
                    .end().start(tweenManager);

                tweenManager.update(Gdx.graphics.getDeltaTime());
            }

            public static Vector2 getStageLocation(Actor actor) {
                return actor.localToStageCoordinates(new Vector2(0, 0));
            }

            @Override
            public void dispose() {
                stage.dispose();
                atlas.dispose();
                skin.dispose();
                menuBG.getTexture().dispose();
            }

            @Override
            public void hide() {
                dispose();
            }

            @Override
            public void pause() {
            }
        }
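    A hedged sketch of one way to add arrow-key handling on top of a menu like this: keep the buttons and their actions in parallel arrays, track a selected index on a stage-level InputListener, restyle the focused button, and run the selected action on ENTER. The "selected" style name is an assumption (it would need adding to menuSkin.json alongside "inactive"); a controller d-pad could drive the same index via gdx-controllers.

        import com.badlogic.gdx.Input.Keys;
        import com.badlogic.gdx.scenes.scene2d.InputEvent;
        import com.badlogic.gdx.scenes.scene2d.InputListener;
        import com.badlogic.gdx.scenes.scene2d.ui.Skin;
        import com.badlogic.gdx.scenes.scene2d.ui.TextButton;
        import com.badlogic.gdx.utils.Array;

        /** Arrow-key navigation for a vertical column of TextButtons. */
        public class MenuNavigator extends InputListener {
            private final Skin skin;
            private final Array<TextButton> buttons;
            private final Array<Runnable> actions; // one action per button
            private int selected = 0;

            public MenuNavigator(Skin skin, Array<TextButton> buttons, Array<Runnable> actions) {
                this.skin = skin;
                this.buttons = buttons;
                this.actions = actions;
                highlight();
            }

            private void highlight() {
                for (int i = 0; i < buttons.size; i++) {
                    // "selected" is a hypothetical style; add it to the skin JSON.
                    buttons.get(i).setStyle(skin.get(
                            i == selected ? "selected" : "inactive",
                            TextButton.TextButtonStyle.class));
                }
            }

            @Override
            public boolean keyDown(InputEvent event, int keycode) {
                if (keycode == Keys.DOWN) {
                    selected = (selected + 1) % buttons.size;
                    highlight();
                    return true;
                }
                if (keycode == Keys.UP) {
                    selected = (selected - 1 + buttons.size) % buttons.size;
                    highlight();
                    return true;
                }
                if (keycode == Keys.ENTER) {
                    actions.get(selected).run(); // same effect as clicking
                    return true;
                }
                return false;
            }
        }

    Wiring it up in show() would then look roughly like stage.addListener(new MenuNavigator(skin, buttons, actions)), with each Runnable doing the setScreen call its ClickListener does today; the stage must be the input processor (it already is here) for key events to arrive.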

    Read the article

  • Using PHP's IMAP library triggers Kaspersky's Antivirus

    - by TMG
    Hello, I just started today working with PHP's IMAP library, and while imap_fetchbody or imap_body are called, it is triggering my Kaspersky antivirus. The viruses are Trojan.Win32.Agent.dmyq and Trojan.Win32.FraudPack.aoda. I am running this off a local development machine with XAMPP and Kaspersky AV.

    Now, I am sure there are viruses there, since there is spam in the box (who doesn't need some viagra or vicodin these days?). And I know that since the raw body includes attachments and different MIME types, bad stuff can be in the body.

    So my questions are: are there any risks using these libraries? I am assuming that the IMAP functions are retrieving the body, caching it to disk/memory, and the AV scanning sees the data. Is that correct? Are there any known security concerns using this library (I couldn't find any)? Does it clean up cached message parts perfectly, or might viral files be sitting somewhere? Is there a better way to get plain text out of the body than this?

    Right now I am using the following code (credit to Kevin Steffer):

        function get_mime_type(&$structure) {
            $primary_mime_type = array("TEXT", "MULTIPART", "MESSAGE", "APPLICATION", "AUDIO", "IMAGE", "VIDEO", "OTHER");
            if ($structure->subtype) {
                return $primary_mime_type[(int) $structure->type] . '/' . $structure->subtype;
            }
            return "TEXT/PLAIN";
        }

        function get_part($stream, $msg_number, $mime_type, $structure = false, $part_number = false) {
            if (!$structure) {
                $structure = imap_fetchstructure($stream, $msg_number);
            }
            if ($structure) {
                if ($mime_type == get_mime_type($structure)) {
                    if (!$part_number) {
                        $part_number = "1";
                    }
                    $text = imap_fetchbody($stream, $msg_number, $part_number);
                    if ($structure->encoding == 3) {
                        return imap_base64($text);
                    } else if ($structure->encoding == 4) {
                        return imap_qprint($text);
                    } else {
                        return $text;
                    }
                }
                if ($structure->type == 1) /* multipart */ {
                    while (list($index, $sub_structure) = each($structure->parts)) {
                        if ($part_number) {
                            $prefix = $part_number . '.';
                        }
                        $data = get_part($stream, $msg_number, $mime_type, $sub_structure, $prefix . ($index + 1));
                        if ($data) {
                            return $data;
                        }
                    } // END OF WHILE
                } // END OF MULTIPART
            } // END OF STRUCTURE
            return false;
        } // END OF FUNCTION

        $connection = imap_open($server, $login, $password);
        $count = imap_num_msg($connection);
        for ($i = 1; $i <= $count; $i++) {
            $header = imap_headerinfo($connection, $i);
            $from = $header->fromaddress;
            $to = $header->toaddress;
            $subject = $header->subject;
            $date = $header->date;
            $body = get_part($connection, $i, "TEXT/PLAIN");
        }

  • Git repo planning questions

    - by masonk
    At work, development uses perforce to handle code sharing. I won't say "revision control", because we aren't allowed to check in changes until they are ready for regression testing. In order to get my personal change sets under revision control, I've been given the go-ahead to build my own git and initialize the client view of the perforce depot as a git repo. There are some difficulties in doing this, however. The client view lives in a subfolder of ~ (~/p4), and I want to put ~ under revision control as well, with its own separate history. I can't figure out how to keep the history for ~ separate from ~/p4 without using a submodule. The problem with a submodule is that it looks like I have to go make a repository that will become the submodule and then git submodule add <repo> <path>. But there is nowhere to make the submodule's repository except in ~. There seems to be no safe place to create the initial client view of the depot with git p4 clone. (I'm working off of the assumption that init-ing or cloning a repo into a subdirectory of a git repo is not supported. At least, I can find nothing authoritative on nested git repos.) edit: Is merely ignoring ~/p4 in the repo rooted at ~ enough to allow me to init a nested repo in ~/p4? My __git_ps1 function still thinks I'm in a git repository when I visit an ignored subdirectory of a git repo, so I'm inclined to think not. I need the "remote" repository created by git p4 sync to be a branch in ~/p4. We are required to keep all of our code in ~/p4 so that it doesn't get backed up. Can I pull from a "remote" branch that is really a local branch? This one is just for convenience, but I thought I could learn something by asking it. For 99% of the project, I just want to start with the p4 head revision as the initial commit object. For the other 1%, I would like to suck down the entire p4 history so that I can browse it in git. IOW, after I'm done initializing it, the initial commit of the remotes/p4/master branch will contain: revision 1 of //depot/prod/Foo/Bar/*, revision X of other files in //depot/prod/*, where X is the head revision, and the remotes/p4/master branch contains Y commits, where Y is the number of changelists that had a file in //depot/prod/Foo/Bar/*, with each commit in the history corresponding to one of those p4 changelists, and HEAD looking like p4's head.
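
    On the nested-repo question, one arrangement that matches how git actually resolves repositories (a sketch; the depot path is illustrative): ignore ~/p4 in the outer repo, then init an independent repo inside it. Once ~/p4 contains its own .git, commands run inside it discover ~/p4/.git first and never touch the outer history; __git_ps1 reporting the outer repo from an ignored directory is only cosmetic until that inner .git exists.

        # outer repo for the home directory, with the client view ignored
        cd ~
        git init
        echo "p4/" >> .gitignore

        # independent nested repo for the perforce client view
        cd ~/p4
        git init
        # or, to import the p4 history as git commits:
        # git p4 clone //depot/prod@all .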

  • How to configure TATA Photon+ EC1261 HUAWEI

    - by user3215
    I'm running Ubuntu 10.04. I have a newly purchased TATA Photon+ Internet connection which supports Windows and Mac. On the Internet I found an article saying that it could be configured on Linux. I followed the steps to install it on Ubuntu from this link. I am still not able to get online, and need some help. Also, it is very slow, though I was told I would see speeds up to 3.1 Mbps. I didn't have wvdial installed and couldn't install it from apt, as I wasn't connected to the internet. Booting into Windows, I downloaded the "wvdial" .deb package and tried to install it on Ubuntu, but it ended with a dependency problem. Somehow, I don't know how, I got connected to the internet just once. I immediately installed the wvdial package, and after this I followed the tutorials (I could not browse and upload the files here). Since then the device shows as connected in the network connections, but there is no internet connection. Once I disable the device, it won't show as connected again and I have to restart my system. Sometimes the device itself is not detected (is there any command to re-scan all devices?). Output of wvdialconf /etc/wvdial.conf: #wvdialconf /etc/wvdial.conf Editing `/etc/wvdial.conf'. Scanning your serial ports for a modem. ttyS0<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud ttyS0<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 115200 baud ttyS0<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up. Modem Port Scan<*1>: S1 S2 S3 WvModem<*1>: Cannot get information for serial port. ttyUSB0<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud ttyUSB0<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud ttyUSB0<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up. WvModem<*1>: Cannot get information for serial port. ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud ttyUSB1<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up. WvModem<*1>: Cannot get information for serial port. ttyUSB2<*1>: ATQ0 V1 E1 -- OK ttyUSB2<*1>: ATQ0 V1 E1 Z -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK ttyUSB2<*1>: Modem Identifier: ATI -- Manufacturer: +GMI: HUAWEI TECHNOLOGIES CO., LTD ttyUSB2<*1>: Speed 9600: AT -- OK ttyUSB2<*1>: Max speed is 9600; that should be safe. ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK Found a modem on /dev/ttyUSB2. Modem configuration written to /etc/wvdial.conf. ttyUSB2<Info>: Speed 9600; init "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0" Output of wvdial: #wvdial --> WvDial: Internet dialer version 1.60 --> Cannot get information for serial port. --> Initializing modem. --> Sending: ATZ ATZ OK --> Sending: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 OK --> Sending: AT+CRM=1 AT+CRM=1 OK --> Modem initialized. --> Sending: ATDT#777 --> Waiting for carrier. ATDT#777 CONNECT --> Carrier detected. Starting PPP immediately. 
--> Starting pppd at Sat Oct 16 15:30:47 2010 --> Pid of pppd: 5681 --> Using interface ppp0 --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> local IP address 14.96.147.104 --> pppd: (u;[08]@s;[08]`{;[08] --> remote IP address 172.29.161.223 --> pppd: (u;[08]@s;[08]`{;[08] --> primary DNS address 121.40.152.90 --> pppd: (u;[08]@s;[08]`{;[08] --> secondary DNS address 121.40.152.100 --> pppd: (u;[08]@s;[08]`{;[08] Output of log message /var/log/messages: Oct 16 15:29:44 avyakta-desktop pppd[5119]: secondary DNS address 121.242.190.180 Oct 16 15:29:58 desktop pppd[5119]: Terminating on signal 15 Oct 16 15:29:58 desktop pppd[5119]: Connect time 0.3 minutes. Oct 16 15:29:58 desktop pppd[5119]: Sent 0 bytes, received 177 bytes. Oct 16 15:29:58 desktop pppd[5119]: Connection terminated. Oct 16 15:30:47 desktop pppd[5681]: pppd 2.4.5 started by root, uid 0 Oct 16 15:30:47 desktop pppd[5681]: Using interface ppp0 Oct 16 15:30:47 desktop pppd[5681]: Connect: ppp0 <--> /dev/ttyUSB2 Oct 16 15:30:47 desktop pppd[5681]: CHAP authentication succeeded Oct 16 15:30:47 desktop pppd[5681]: CHAP authentication succeeded Oct 16 15:30:48 desktop pppd[5681]: local IP address 14.96.147.104 Oct 16 15:30:48 desktop pppd[5681]: remote IP address 172.29.161.223 Oct 16 15:30:48 desktop pppd[5681]: primary DNS address 121.40.152.90 Oct 16 15:30:48 desktop pppd[5681]: secondary DNS address 121.40.152.100 EDIT 1 : I tried the following sudo stop network-manager sudo killall modem-manager sudo /usr/sbin/modem-manager --debug > ~/mm.log 2>&1 & sudo /usr/sbin/NetworkManager --no-daemon > ~/nm.log 2>&1 & Output of mm.log: #vim ~/mm.log: ** Message: Loaded plugin Option High-Speed ** Message: Loaded plugin Option ** Message: Loaded plugin Huawei ** Message: Loaded plugin Longcheer ** Message: Loaded plugin AnyData ** Message: Loaded plugin ZTE ** Message: Loaded plugin Ericsson MBM ** Message: Loaded plugin Sierra ** Message: Loaded plugin Generic ** Message: Loaded plugin Gobi ** Message: Loaded plugin Novatel ** Message: Loaded plugin Nokia ** Message: Loaded plugin MotoC Output of nm.log: #vim ~/nm.log: NetworkManager: <info> starting... NetworkManager: <info> modem-manager is now available NetworkManager: SCPlugin-Ifupdown: init! NetworkManager: SCPlugin-Ifupdown: update_system_hostname NetworkManager: SCPluginIfupdown: guessed connection type (eth0) = 802-3-ethernet NetworkManager: SCPlugin-Ifupdown: update_connection_setting_from_if_block: name:eth0, type:802-3-ethernet, id:Ifupdown (eth0), uuid: 681b428f-beaf-8932-dce4-678ed5bae28e NetworkManager: SCPlugin-Ifupdown: addresses count: 1 NetworkManager: SCPlugin-Ifupdown: No dns-nameserver configured in /etc/network/interfaces NetworkManager: nm-ifupdown-connection.c.119 - invalid connection read from /etc/network/interfaces: (1) addresses NetworkManager: SCPluginIfupdown: management mode: unmanaged NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:14.4/0000:02:02.0/net/eth1, iface: eth1) NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:14.4/0000:02:02.0/net/eth1, iface: eth1): no ifupdown configuration found. NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/lo, iface: lo) @
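
    For comparison, a typical hand-written /etc/wvdial.conf for this class of EVDO modem looks roughly like this (the #777 number and /dev/ttyUSB2 port come from the logs above; the internet/internet credentials are a commonly reported Tata Photon+ default and may differ per account):

        [Dialer Defaults]
        Modem = /dev/ttyUSB2
        Baud = 460800
        Init1 = ATZ
        Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
        Stupid Mode = 1
        Phone = #777
        Username = internet
        Password = internet

    Stupid Mode makes wvdial hand off to pppd right after CONNECT instead of waiting for a login prompt, which matches the "Carrier detected. Starting PPP immediately" line in the log.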

  • How can I read/write data from a file?

    - by samy
    I'm writing a simple Chrome extension. I need the ability to add site URLs to a list, and to read from the list. I use the list to open the sites in new tabs. I'm looking for a way to have a data file I can write to and read from. I was thinking of XML. I read there is a problem changing the content of files with JavaScript. Is XML the right choice for this kind of thing? I should add that there is no web server, and the app will run locally, so maybe the problems websites have don't apply here. Before I wrote this question, I tried one thing, and started to feel insecure because it didn't work. I made an XML file called Sites.xml: <?xml version="1.0" encoding="utf-8" ?> <Sites> <site> <url> http://www.sulamacademy.com/AddMsgForum.asp?FType=273171&SBLang=0&WSUAccess=0&LocSBID=20375 </url> </site> <site> <url> http://www.wow.co.il </url> </site> <site> <url> http://www.Google.co.il </url> </site> </Sites> I made this script to read the data from it and put it on the HTML page: function LoadXML() { var ajaxObj = new XMLHttpRequest(); ajaxObj.open('GET', 'Sites.xml', false); ajaxObj.send(); var myXML = ajaxObj.responseXML; document.write('<table border="2">'); var prs = myXML.getElementsByTagName("site"); for (var i = 0; i < prs.length; i++) { document.write("<tr><td>"); document.write(prs[i].getElementsByTagName("url")[0].childNodes[0].nodeValue); document.write("</td></tr>"); } document.write("</table>"); }
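
    Since this runs as a Chrome extension rather than a plain web page, the extension storage API sidesteps file writing entirely (a sketch assuming the storage and tabs permissions are declared in the manifest; on older Chrome versions, localStorage with JSON.stringify achieves the same effect):

        // save the list of site URLs
        chrome.storage.local.set({ sites: ['http://www.wow.co.il', 'http://www.google.co.il'] });

        // read the list back and open each site in a new tab
        chrome.storage.local.get('sites', function (result) {
            (result.sites || []).forEach(function (url) {
                chrome.tabs.create({ url: url });
            });
        });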

  • Problems using FluentNHibernate + SQLite with .NET4?

    - by stiank81
    I have a WPF application running with VS2010 and .Net 3.5, using NHibernate with FluentNHibernate + SQLite, and all works fine. Now I want to change to .Net 4, but this has turned into a more painful experience than I expected. When setting up the connection a FluentConfigurationException is thrown from FluentConfiguration.BuildConfiguration saying: An invalid or incomplete configuration was used while creating a SessionFactory. Check PotentialReasons collection, and InnerException for more details. The inner exception gives us more information: Could not create the driver from NHibernate.Driver.SQLite20Driver, NHibernate, Version=2.1.2.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4. It has an InnerException again: Exception has been thrown by the target of an invocation. Which again has an InnerException: The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use the <qualifyAssembly> element in the application configuration file to specify the full name of the assembly. Now, to me it sounds like it doesn't find System.Data.SQLite.dll, but I can't understand this. Everywhere it is referenced I have "Copy Local", and I have verified that it is in every build folder for projects using SQLite. I have also copied it manually to every Debug folder of the solution, without luck. Notes: This is exactly the same code that worked just fine before I upgraded to .Net 4. I did see some x64/x86 mismatch problems earlier, but I have switched to x86 as the target platform and for all referenced dlls. I have verified that all files in the Debug folder are x86. I have tried the precompiled Fluent dlls, I have tried compiling them myself, and I have compiled my own version of Fluent using .Net 4. I see that others have hit this problem too, but I haven't seen any solution yet. After @devio's answer I tried adding a reference to the SQLite dll. This didn't change anything, but I hope I made it right though. This is what I added to the root node of the app.config file: <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <qualifyAssembly partialName="System.Data.SQLite" fullName="System.Data.SQLite, Version=1.0.60.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139" /> </assemblyBinding> </runtime> Anyone out there using Fluent with .Net 4 and SQLite successfully? Help! I'm lost...
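
    For context, the root cause usually reported for this exact error (an assumption here, since it depends on the System.Data.SQLite build in use): System.Data.SQLite 1.0.60 is a mixed-mode assembly compiled against the v2 runtime, and a .Net 4 process will not load it unless legacy runtime activation is enabled in app.config:

        <?xml version="1.0"?>
        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0" />
          </startup>
        </configuration>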

  • Mobile BI Comes of Age

    - by rich.clayton(at)oracle.com
    One of the hot topics in the Business Intelligence industry is mobility. More specifically, the question is how business can be transformed by the iPhone and the iPad. In June 2003, Gartner predicted that Mobile BI would be obsolete and that the technology was headed for the 'trough of disillusionment'. I agreed with them at that time. Many vendors like MicroStrategy and Business Objects jumped into the fray, attempting to show how PDAs like Palm Pilots could be integrated with BI. Their investments resulted in interesting demos with no commercial traction. Why? Because wireless networks and mobile operating systems were primitive, immature and slow. In my opinion, Apple's iOS has changed everything in Mobile BI. Yes, Blackberry, Android, Symbian and all the rest have their place in the market, but I believe that consumers (not IT departments) increasingly influence BI decision-making processes. Consumers are choosing the iPhone and the iPad. The number of iPads I see in business meetings now is staggering. Some use them for email and note taking, and others are starting to use corporate applications. The possibilities for Mobile BI are countless, and I would expect to see iPads enterprise-wide over the next few years. These new devices will provide just-in-time access to critical business information.
    Front-line managers interacting with customers, suppliers, patients or citizens will have information literally at their fingertips. I've experimented with several mobile BI tools. They look cool, but like their Executive Information System (EIS) predecessors of the 1990s, these tools lack a backbone and a plausible integration strategy. EIS was a viral technology in the early 1990s. Executives from every industry and job function were showcasing their dashboards to fellow co-workers and colleagues at the country club. Just like the iPad, every senior manager wanted one. EIS wasn't a device, however; it was a software application. EIS quickly faded into the software sunset as it lacked integration with corporate information systems. BI servers replaced EIS because the technology focused on the heavy data lifting of integrating, normalizing, aggregating and managing large, complex data volumes. The devices are here to stay. The cute stand-alone mobile BI tools, not so much. If all you're looking to do is put Excel files on your iPad, there are plenty of free tools on the market. You'll look cool at your next management meeting, but after a few weeks the cool factor will fade away and you'll be wondering how you will ever maintain it. If, however, you want secure, consistent, reliable information on your iPad, you need an integration strategy and a way to model the data. BI server technologies like the Oracle BI Foundation are a market-leading approach to tackling that issue. I liken the Mobile BI frenzy to buying classic cars. Classic cars have two buying groups: teenagers and middle-aged folks looking to tinker. Teenagers look at the pin-stripes and the paint job, while middle-agers (like me) kick the tires a bit and look under the hood to check out the quality and reliability of the engine. Mobile BI tools sure look sexy, but they don't go very far without an engine and a transmission, that is, an integration strategy. The strategic question in Mobile BI is: can these startups build a motor and transmission faster than Oracle can re-paint the car? Oracle has a great engine and a transmission that connects to all enterprise information assets. We're working on the new paint job and are excited about the possibilities. Just as vertical integration worked in the automotive business, it works in the technology industry too.

  • creating executable jar file for my java application

    - by Manu
    public class createExcel { public void write() throws IOException, WriteException { WorkbookSettings wbSettings = new WorkbookSettings(); wbSettings.setLocale(new Locale("en", "EN")); WritableWorkbook workbook1 = Workbook.createWorkbook(new File(file), wbSettings); workbook1.createSheet("Niru ", 0); WritableSheet excelSheet = workbook1.getSheet(0); createLabel(excelSheet); createContent(excelSheet, list); workbook1.write(); workbook1.close(); } public void createLabel(WritableSheet sheet) throws WriteException { WritableFont times10pt = new WritableFont(WritableFont.createFont("D:\\font\\trebuct"), 8); // Define the cell format times = new WritableCellFormat(times10pt); // Let's automatically wrap the cells times.setWrap(false); WritableFont times10ptBoldUnderline = new WritableFont( WritableFont.createFont("D:\\font\\trebuct"), 9, WritableFont.BOLD, false, UnderlineStyle.NO_UNDERLINE); timesBoldUnderline = new WritableCellFormat(times10ptBoldUnderline); sheet.setColumnView(0, 15); sheet.setColumnView(1, 13); // Write a few headers addCaption(sheet, 0, 0, "Business Date"); addCaption(sheet, 1, 0, "Dealer ID"); } private void createContent(WritableSheet sheet, ArrayList list) throws WriteException, RowsExceededException { // Write a few numbers for (int i = 1; i < 11; i++) { for (int j = 0; j < 11; j++) { // First column addNumber(sheet, i, j, 1); // Second column addNumber(sheet, 1, i, i * i); } } } private void addCaption(WritableSheet sheet, int column, int row, String s) throws RowsExceededException, WriteException { Label label; label = new Label(column, row, s, timesBoldUnderline); sheet.addCell(label); } private void addNumber(WritableSheet sheet, int row, int column, Integer integer) throws WriteException, RowsExceededException { Number number; number = new Number(column, row, integer, times); sheet.addCell(number); } public static void main(String[] args) { JButton myButton0 = new JButton("Advice_Report"); JButton myButton1 = new JButton("Position_Report"); JPanel bottomPanel = new JPanel(); bottomPanel.add(myButton0); bottomPanel.add(myButton1); myButton0.addActionListener(this); myButton1.addActionListener(this); createExcel obj = new createExcel(); obj.setOutputFile("c:\\temp\\swings\\jack.xls"); try { obj.write(); } catch(Exception e) {} } and so on. It is working fine. I have jxl.jar and ojdbc14.jar (needed for the Excel sheet creation and the DB connection) and createExcel.class. How do I turn this code into an executable jar file?
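
    The standard packaging steps look roughly like this (a sketch; it assumes createExcel is in the default package and that jxl.jar and ojdbc14.jar sit next to the generated jar at runtime):

        # manifest.txt -- the Class-Path entries are relative to the jar,
        # and the file must end with a newline
        Main-Class: createExcel
        Class-Path: jxl.jar ojdbc14.jar

        # build the jar with that manifest, then run it
        jar cvfm createExcel.jar manifest.txt createExcel.class
        java -jar createExcel.jar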

  • How to break a Hibernate session?

    - by Péter Török
    In the Hibernate reference, it is stated several times that: All exceptions thrown by Hibernate are fatal. This means you have to roll back the database transaction and close the current Session. You aren't allowed to continue working with a Session that threw an exception. One of our legacy apps uses a single session to update/insert many records from files into a DB table. Each record update/insert is done in a separate transaction, which is then duly committed (or rolled back in case an error occurred). Then for the next record a new transaction is opened, etc. But the same session is used throughout the whole process, even if a HibernateException was caught in the middle. We are using Oracle 9i, btw, with Hibernate 3.2.4.sp1 on JBoss 4.2. Reading the above in the book, I realized that this design may fail. So I refactored the app to use a separate session for each record update. In a unit test with a mock session factory, I could prove that it now requests a new session for each record update. So far, so good. However, we found no way to reproduce the session failure while testing the whole app (would this be a stress test, btw, or ...?). We thought of shutting down the listener of the DB, but we realized that the app keeps a bunch of connections open to the DB, and the listener would not affect those connections. (This is a web app, activated once every night by a scheduler, but it can also be activated via the browser.) Then we tried to kill some of those connections in the DB while the app was processing updates; this resulted in some failed updates, but then the app happily continued. Apparently Hibernate is clever enough to reopen broken connections under the hood without breaking the whole session. So this might not be a critical issue, as our app seems to be robust enough even in its original form. However, the issue keeps bugging me. I would like to know: Under what circumstances does the Hibernate session really become unusable after a HibernateException was thrown? How can I reproduce this in a test? (What's the proper term for such a test?)
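
    The refactored session-per-record shape described above looks roughly like this (a sketch; sessionFactory, Record and log stand in for the app's actual types):

        for (Record record : records) {
            Session session = sessionFactory.openSession();
            Transaction tx = null;
            try {
                tx = session.beginTransaction();
                session.saveOrUpdate(record);
                tx.commit();
            } catch (HibernateException e) {
                if (tx != null) {
                    tx.rollback();
                }
                // per the reference, a session that threw must not be reused
                log.error("Failed to persist record", e);
            } finally {
                session.close(); // a fresh session is opened for the next record
            }
        }

    As for forcing the failure in a test: commit-time faults (e.g. a constraint violation surfacing at flush) or severing the JDBC connection in the middle of a transaction are the usual triggers, and a test that deliberately injects such failures is generally called fault-injection testing rather than a stress test.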
