Search Results

Search found 29203 results on 1169 pages for 'state machine workflow'.


  • Accessing py2exe program over network in Windows 98 throws ImportErrors

    - by darvids0n
    I'm running a py2exe-compiled Python program from one server machine on a number of client machines (mapped to a network drive on every machine, say W:). For Windows XP and later machines, I have so far had zero problems with Python picking up W:\python23.dll (yes, I'm using Python 2.3.5 for W98 compatibility and all that). It will then use W:\zlib.pyd to decompress W:\library.zip, which contains all the .pyc files like os and such; these are then imported and the program runs with no problems.

    The issue I'm getting is on some Windows 98 SE machines (note: SOME Windows 98 SE machines; others seem to work with no apparent issues). What happens is, the program runs from W:, and W:\python23.dll is, I assume, found (since I'm getting Python ImportErrors, we'd need to be able to execute a Python import statement), but a couple of things don't work:

    1) If W:\library.zip contains the only copy of the .pyc files, I get ZipImportError: can't decompress data; zlib not available (nonsense, considering W:\zlib.pyd IS available and works fine with the XP and higher machines on the same network).

    2) If the .pyc files are actually bundled INSIDE the python exe by py2exe, OR put in the same directory as the .exe, OR put into a named subdirectory which is then set as part of the PYTHONPATH variable (e.g. W:\pylib), I get ImportError: no module named os (os is the first module imported, before sys and anything else).

    Come to think of it, would sys.path even be available to search if os is imported before sys? I'll try switching the order of those imports, but my question still stands: why is this a sporadic issue, working on some networks but not on others? And how would I force Python to find the files that are bundled inside the very executable I run? I have immediate access to the working Windows 98 SE machine, but I only get access to the non-working one (a customer of mine) every morning before their store opens. Thanks in advance!

    EDIT: Okay, big step forward. After debugging with PY2EXE_VERBOSE, the problem occurring on the specific W98SE machine is that it's not using the right path syntax when looking for imports. Firstly, it doesn't seem to read the PYTHONPATH environment variable (there may be a py2exe-specific one I'm not aware of, like PY2EXE_VERBOSE). Secondly, it only looks in one place before giving up (if the files are bundled inside the EXE, it looks there; if not, it looks in library.zip).

    EDIT 2: In fact, according to this, there is a difference between the sys.path in the Python interpreter and that of py2exe executables. Specifically, sys.path contains only a single entry: the full pathname of the shared code archive. Blah. No fallbacks? Not even the current working directory? I'd try adding W:\ to PATH, but py2exe doesn't conform to any sort of standard for locating system libraries, so it won't work. Now for the interesting bit. The path it tries to load atexit, os, etc. from is:

        W:\\library.zip\<module>.<ext>

    Note the single slash after library.zip, but the double slash after the drive letter (someone correct me if this is intended and should work). It looks like, if this is a string literal, then since the slash isn't doubled it's read as an (invalid) escape sequence and the raw character is printed (giving W:\library.zipos.pyd, W:\library.zipos.dll, ... instead of paths with a slash); if it is NOT a string literal, the double slash might not be normpath'd automatically (as it should be), and so the double slash confuses the module loader.

    Like I said, I can't just set PYTHONPATH=W:\\library.zip\\ because it ignores that variable. It may be worth using sys.path.append at the start of my program, but hard-coding module paths is an absolute LAST resort, especially since the problem occurs in ONE configuration of an outdated OS. Any ideas? I have one, which is to normpath every entry in sys.path... a pity I need os for that. Another is to just append os.getenv('PATH') or os.getenv('PYTHONPATH') to sys.path... again, needing the os module. The site module also fails to initialise, so I can't use a .pth file. I also recently tried the following code at the start of the program:

        for pth in sys.path:
            fErr.write(pth)
            fErr.write(' to ')
            pth.replace('\\\\','\\') # Fix Windows 98 pathing issues
            fErr.write(pth)
            fErr.write('\n')

    But it can't load linecache.pyc, or anything else for that matter; it can't actually execute those commands from the looks of things. Is there any way to use built-in functionality which doesn't need linecache to modify sys.path dynamically? Or am I reduced to hard-coding the correct sys.path?
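    A minimal sketch of that sys.path rewrite idea, assuming only sys is importable at that point. Note that str.replace returns a new string rather than modifying the original, so the loop above never actually changed sys.path; reassigning each entry by index does:

        import sys

        # str.replace returns a new string, so assign the result back
        # into sys.path instead of discarding it
        for i, pth in enumerate(sys.path):
            sys.path[i] = pth.replace('\\\\', '\\')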

    Read the article

  • Why is the form action attribute empty on production server?

    - by Ozzy
    After deploying an ASP.NET WebForms application to a production server, some of the Ajax calls have stopped working for me. After some debugging I found out that the following statement (in the internal client method WebForm_DoCallback) was causing the problem:

        xmlRequest.open("POST", action, true);

    The problem seemed to be that the action variable was empty, so after checking the rendered HTML I found out that the form tag rendered on the production server looks like:

        <form method="post" action="" id="Form1">

    However, on my developer machine I get the following:

        <form method="post" action="default.aspx" id="Form1">

    So, why would the action attribute render on my dev machine but not on the production server? This seems to be the case only for one specific web form; when I look at other pages the action attribute renders correctly. Any suggestions or ideas would be helpful!

    Read the article

  • installing simplejson on the google appengine

    - by user266564
    Super noob question time! I am trying to use simplejson on Google App Engine. In a terminal on my machine I have simplejson installed and working, but when I try to import it in a script running on App Engine I get an error saying no such library exists. If I open the interactive console on my machine (from the link on http://localhost:8080/_ah/admin) and type "import simplejson" I get:

        Traceback (most recent call last):
          File "/home/chris/google_appengine/google/appengine/ext/admin/__init__.py", line 210, in post
            exec(compiled_code, globals())
          File "<string>", line 1, in <module>
        ImportError: No module named simplejson

    Any thoughts?
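    A common workaround, sketched below; it assumes either a copy of simplejson vendored into the application directory (only files under the app folder get uploaded to App Engine) or the Django build bundled with the App Engine SDK:

        try:
            import simplejson                    # a copy placed in the app directory
        except ImportError:
            from django.utils import simplejson  # bundled with the App Engine SDK

        payload = simplejson.dumps({'status': 'ok'})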

    Read the article

  • Tomcat on Windows x64 using 32-bit JDK

    - by Erik
    Hoping someone can help. The rub: I can't get Tomcat 5.5 to start as a Windows service on 64-bit Windows using a 32-bit JDK. The details: I've been running Tomcat 5.5 on Windows Server 2008 (x64) as a service for some time using a 64-bit JDK. I'm being forced to install a 32-bit JDK on this 64-bit machine so I can make use of the Java JAI libraries (there is no 64-bit JAI version), and I have to run Tomcat using this 32-bit JDK. I can run Tomcat using the 32-bit JDK if I start it using /bin/startup.bat; the problem is, it will not start as a Windows service. I'm using the Tomcat-bundled procrun executables. Has anyone had success starting Tomcat as a service using a 32-bit JDK on a 64-bit machine? Thanks for your expertise.

    Read the article

  • Use a Fake Http Channel to Unit Test with HttpClient

    - by Steve Michelotti
    Applications get data from lots of different sources. The most common is to get data from a database or a web service. Typically, we encapsulate calls to a database in a Repository object and we create some sort of IRepository interface as an abstraction to decouple between layers and enable easier unit testing by leveraging faking and mocking. This works great for database interaction. However, when consuming a RESTful web service, this is not always the best approach. The WCF Web APIs that are available on CodePlex (current drop is Preview 3) provide a variety of features to make building HTTP REST services more robust. When you download the latest bits, you'll also find a new HttpClient which has been updated for .NET 4.0 as compared to the one that shipped for 3.5 in the original REST Starter Kit. The HttpClient currently provides the best API for consuming REST services on the .NET platform, and the WCF Web APIs provide a number of extension methods which extend HttpClient and make it even easier to use. Let's say you have a client application that is consuming an HTTP service – this could be Silverlight, WPF, or any UI technology, but for my example I'll use an MVC application:

        1: using System;
        2: using System.Net.Http;
        3: using System.Web.Mvc;
        4: using FakeChannelExample.Models;
        5: using Microsoft.Runtime.Serialization;
        6:
        7: namespace FakeChannelExample.Controllers
        8: {
        9:     public class HomeController : Controller
        10:     {
        11:         private readonly HttpClient httpClient;
        12:
        13:         public HomeController(HttpClient httpClient)
        14:         {
        15:             this.httpClient = httpClient;
        16:         }
        17:
        18:         public ActionResult Index()
        19:         {
        20:             var response = httpClient.Get("Person(1)");
        21:             var person = response.Content.ReadAsDataContract<Person>();
        22:
        23:             this.ViewBag.Message = person.FirstName + " " + person.LastName;
        24:
        25:             return View();
        26:         }
        27:     }
        28: }

    On line #20 of the code above you can see I'm performing an HTTP GET request to a Person resource exposed by an HTTP service. On line #21, I use the ReadAsDataContract() extension method provided by the WCF Web APIs to deserialize to a Person object. In this example, the HttpClient is being passed into the constructor by MVC's dependency resolver – in this case, I'm using StructureMap as an IoC container and my StructureMap initialization code looks like this:

        1: using StructureMap;
        2: using System.Net.Http;
        3:
        4: namespace FakeChannelExample
        5: {
        6:     public static class IoC
        7:     {
        8:         public static IContainer Initialize()
        9:         {
        10:             ObjectFactory.Initialize(x =>
        11:             {
        12:                 x.For<HttpClient>().Use(() => new HttpClient("http://localhost:31614/"));
        13:             });
        14:             return ObjectFactory.Container;
        15:         }
        16:     }
        17: }

    My controller code currently depends on a concrete instance of the HttpClient. Now I *could* create some sort of interface, wrap the HttpClient in this interface, and use that object inside my controller instead – however, there are a few reasons why that is not desirable: For one thing, the API provided by the HttpClient provides nice features for dealing with HTTP services. I don't really *want* these to look like C# RPC method calls – when HTTP services have REST features, I may want to inspect HTTP response headers and hypermedia contained within the message so that I can make intelligent decisions as to what to do next in my workflow (although I don't happen to be doing these things in my example above) – this type of workflow is common in hypermedia REST scenarios. If I just encapsulate HttpClient behind some IRepository interface and make it look like a C# RPC method call, it will become difficult to take advantage of these types of things. Second, it could get pretty mind-numbing to have to create interfaces all over the place just to wrap the HttpClient. Then you're probably going to have to hard-code HTTP knowledge into your code to formulate requests rather than just "following the links" that the hypermedia in a message might provide. Third, at first glance it might appear that we need to create an interface to facilitate unit testing, but actually it's unnecessary. Even though the code above is dependent on a concrete type, it's actually very easy to fake the data in a unit test. The HttpClient provides a Channel property (of type HttpMessageChannel) which allows you to create a fake message channel which can be leveraged in unit testing. In this case, what I want is to be able to write a unit test that just returns fake data. I also want this to be as re-usable as possible for my unit testing. I want to be able to write a unit test that looks like this:

        1: [TestClass]
        2: public class HomeControllerTest
        3: {
        4:     [TestMethod]
        5:     public void Index()
        6:     {
        7:         // Arrange
        8:         var httpClient = new HttpClient("http://foo.com");
        9:         httpClient.Channel = new FakeHttpChannel<Person>(new Person { FirstName = "Joe", LastName = "Blow" });
        10:
        11:         HomeController controller = new HomeController(httpClient);
        12:
        13:         // Act
        14:         ViewResult result = controller.Index() as ViewResult;
        15:
        16:         // Assert
        17:         Assert.AreEqual("Joe Blow", result.ViewBag.Message);
        18:     }
        19: }

    Notice on line #9, I'm setting the Channel property of the HttpClient to be a fake channel. I'm also specifying the fake object that I want to be in the response of my "fake" HTTP request. I don't need to rely on any mocking frameworks to do this. All I need is my FakeHttpChannel. The code to do this is not complex:

        1: using System;
        2: using System.IO;
        3: using System.Net.Http;
        4: using System.Runtime.Serialization;
        5: using System.Threading;
        6: using FakeChannelExample.Models;
        7:
        8: namespace FakeChannelExample.Tests
        9: {
        10:     public class FakeHttpChannel<T> : HttpClientChannel
        11:     {
        12:         private T responseObject;
        13:
        14:         public FakeHttpChannel(T responseObject)
        15:         {
        16:             this.responseObject = responseObject;
        17:         }
        18:
        19:         protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
        20:         {
        21:             return new HttpResponseMessage()
        22:             {
        23:                 RequestMessage = request,
        24:                 Content = new StreamContent(this.GetContentStream())
        25:             };
        26:         }
        27:
        28:         private Stream GetContentStream()
        29:         {
        30:             var serializer = new DataContractSerializer(typeof(T));
        31:             Stream stream = new MemoryStream();
        32:             serializer.WriteObject(stream, this.responseObject);
        33:             stream.Position = 0;
        34:             return stream;
        35:         }
        36:     }
        37: }

    The HttpClientChannel provides a Send() method which you can override to return any HttpResponseMessage that you want. You can see I'm using the DataContractSerializer to serialize the object and write it to a stream. That's all you need to do. In the example above, the only thing I've chosen to do is to provide a way to return different response objects. But there are many more features you could add to your own re-usable FakeHttpChannel. For example, you might want to provide the ability to add HTTP headers to the message. You might want to use a different serializer other than the DataContractSerializer. You might want to provide custom hypermedia in the response as well as just an object, or set HTTP response codes. The list goes on. This is just one example of the really cool features being added to the next version of WCF to enable various HTTP scenarios. The code sample for this post can be downloaded here.
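    The same transport-swapping seam exists outside .NET. As a rough Python analogue – a sketch using the requests library's transport adapters, with the URL and payload purely illustrative – the test replaces the transport instead of wrapping the client in an interface:

        import json
        import requests
        from requests.adapters import BaseAdapter

        class FakeAdapter(BaseAdapter):
            """Transport that returns a canned JSON payload instead of hitting the network."""
            def __init__(self, payload):
                super(FakeAdapter, self).__init__()
                self.payload = payload

            def send(self, request, **kwargs):
                # Build the response by hand; no socket is ever opened.
                response = requests.models.Response()
                response.status_code = 200
                response._content = json.dumps(self.payload).encode("utf-8")
                response.headers["Content-Type"] = "application/json"
                response.url = request.url
                response.request = request
                return response

            def close(self):
                pass

        session = requests.Session()
        session.mount("http://", FakeAdapter({"FirstName": "Joe", "LastName": "Blow"}))
        person = session.get("http://foo.com/Person(1)").json()
        assert person["FirstName"] + " " + person["LastName"] == "Joe Blow"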

    Read the article

  • Windows Disk I/O Analysis

    - by Jonathon
    It appears that we are having a problem with the disk I/O speed on our Windows 2003 Enterprise Edition server (64-bit). As we were initializing a database that created two 1 GB tablespaces on 3 different machines, it became obvious that the two smaller machines (each 32-bit Windows 2003 Standard Edition with less RAM) killed the larger machine when creating the files: the larger machine took 10x as long to create the tablespaces as the other machines did. Now, I am left wondering how that could be. What programs or scripts would you recommend for tracking down the I/O problem? I think the issue may be with the controller card (all boxes are hardware RAID 10, but have different controller cards), but I would like to check the actual disk I/O speed as well, so I have some hard numbers to work with. Any help would be appreciated.
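    For a quick hard number before reaching for tools like PerfMon's PhysicalDisk counters or Iometer, a minimal sketch along these lines (the test path and the 1 GiB size are placeholders) times a sequential write on each box:

        import os, time

        PATH = r"C:\temp\iotest.bin"      # placeholder test location
        CHUNK = "\0" * (1024 * 1024)      # 1 MiB buffer

        start = time.time()
        f = open(PATH, "wb")
        for _ in range(1024):             # 1024 x 1 MiB = 1 GiB total
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())              # force the data out of the OS cache
        f.close()
        elapsed = time.time() - start
        print("Wrote 1 GiB in %.1fs (%.1f MB/s)" % (elapsed, 1024 / elapsed))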

    Read the article

  • heroku complaining about my public key created by ssh-keygen2

    - by yuri
    I am trying to access my heroku app from work (a Windows machine). I installed Cygwin on the machine and generated an ssh key as well. However, I get the below error:

        C:\> heroku keys:add "C:\cygwin\home\4541450\[email protected]"
        Uploading ssh public key C:\cygwin\home\4541450\[email protected]
        Enter your Heroku credentials.
        Email: [email protected]
        Password:
        Uploading ssh public key C:\cygwin\home\c54550\[email protected]
         !   Contents Invalid public key / Contents Invalid public key / Fingerprint can't be blank

    I generated the ssh key with the command below (ssh-keygen is not available with this Cygwin):

        ssh-keygen2 "[email protected]" -t rsa

    Read the article

  • Code working in jsFiddle but not in browser...

    - by Jascination
    I'm trying to run the following code, written on my local machine: http://www.jsfiddle.net/WShdx/3/ Functionality-wise (ignore the broken images in the previous and next buttons, and the wrongly-sized main images), the click/hover function is working properly in jsFiddle. However, on my local machine it's not working at all. For all intents and purposes the code is identical, except that the local copy has this head section to load in the JavaScript/CSS files that are contained within the jsFiddle page:

        <head>
            <title>Jquery Wizard</title>
            <link href="style.css" rel="stylesheet" type="text/css" />
            <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script>
            <script src="wizard.js" type="text/javascript" ></script>
        </head>

    Is there some wonderful function of jsFiddle that is making my code magically work, or am I missing something here?

    Read the article

  • Using makecert for Development SSL

    - by John
    Here's my situation: I'm trying to create an SSL certificate that will be installed on all developers' machines, along with two internal servers (everything is non-production). What do I need to do to create a certificate that can be installed in all of these places? Right now I've got something along these lines, using the makecert application in Microsoft Visual Studio 8\SDK\v2.0\Bin:

        makecert -r -pe -n "CN=MySite.com Dev" -b 01/01/2000 -e 01/01/2033 -eku 1.3.6.1.5.5.7.3.1 -ss Root -sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 mycert.cer

    However, I'm not sure how to place this .cer file on the other computers, and when I install it in my local machine's IIS, every time I visit a page via https: I get the security prompt (even after I've installed the certificate). Has anyone done this before?

    Read the article

  • Deploying a Rails App to Multiple Servers using Capistrano - Best Practices

    - by Louise
    I have a Rails application that I need to deploy to 3 servers – machine1.com, machine2.com and machine3.com. I want to be able to deploy it to all machines at once and to each machine individually. Can someone help me out with a skeleton Capistrano config file / recipe? Should it all be in deploy.rb or should I break it out into machine1.rb, etc.? I thought I was on the right track getting Capistrano to take in command line arguments, but it choked when I tried to set the roles within the namespaces. I'd pass in 'hosts=1,2,3' as an argument and set the :app/:web/:db roles to "machine#{host}.com" after splitting on the comma and going into an each do |host| {}... Anyway, other than creating 4 different deploy.rb files and renaming them before running cap deploy each time, I'm stumped. I'd like to be able to do the following:

        cap deploy:machine1:latest_version_from_svn
        cap deploy:all_machines:latest_version_from_svn

    Just don't know if it should all be in deploy.rb, split up with namespaces, or if it should be broken into multiple deploy*.rb files.

    Read the article

  • What are possible reasons for java.io.IOException: "The filename, directory name, or volume label syntax is incorrect"?

    - by Turismo
    I am trying to copy a file using the following code:

        File targetFile = new File(targetPath + File.separator + filename);
        ...
        targetFile.createNewFile();
        fileInputStream = new FileInputStream(fileToCopy);
        fileOutputStream = new FileOutputStream(targetFile);
        byte[] buffer = new byte[64*1024];
        int i = 0;
        while((i = fileInputStream.read(buffer)) != -1) {
            fileOutputStream.write(buffer, 0, i);
        }

    For some users the targetFile.createNewFile() call results in this exception:

        java.io.IOException: The filename, directory name, or volume label syntax is incorrect
            at java.io.WinNTFileSystem.createFileExclusively(Native Method)
            at java.io.File.createNewFile(File.java:850)

    The filename and directory name seem to be correct. The directory targetPath is even checked for existence before the copy code is executed, and the filename looks like this: AB_timestamp.xml. The user has write permissions to the targetPath and can copy the file without problems using the OS. As I don't have access to a machine this happens on yet, and can't reproduce the problem on my own machine, I turn to you for hints on the reason for this exception.

    Read the article

  • Security exception with ASP.NET AJAX toolkit

    - by Rod
    I've got an ASP.NET WebForms app that I've written, which uses the ASP.NET AJAX Toolkit. I've put the MultiView control onto the web form, and it worked fine when I had it under Vista. Well, I had to replace my machine (the HD failed) and I went to Windows 7 Ultimate. I copied the ASP.NET app from the old system (before it finally failed for good) and put it onto the Windows 7 machine. I can bring up the app fine and go to all pages except the one with these controls on it. When I do, I get the following error:

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.
        Exception Details: System.Security.SecurityException: Request for the permission of type 'System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.

    What's going on? How do I fix it?

    Read the article

  • Jrun Server crashes when a page has cfform,cfgrid,cflayout etc..

    - by kayteen
    Hi, I am having a weird problem. I have an application that works perfectly on my development machine and UAT machine, both of which are Windows Server 2003/CF8. When I uploaded the same application to a Solaris box with CF8 and accessed the site, it works perfectly until I hit a page that has CFFORM, CFLAYOUT, CFGRID, etc. The JRun server just crashes [jrpp-2 unexpected constant#48...]. There is nothing available in any of the logs. Please help me work out how to resolve this! Thanks, Bittoo More info: http://forums.adobe.com/thread/605411?tstart=0

    Read the article

  • How to resolve "dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb"?

    - by raz7588
    Update Manager will not update although I have over 100 updates to do. I get an error message like this:

        installArchives() failed:
        Extracting templates from packages: 29%
        Extracting templates from packages: 58%
        Extracting templates from packages: 88%
        Extracting templates from packages: 100%
        Preconfiguring packages ...
        Extracting templates from packages: 29%
        Extracting templates from packages: 58%
        Extracting templates from packages: 88%
        Extracting templates from packages: 100%
        Preconfiguring packages ...
        Extracting templates from packages: 29%
        Extracting templates from packages: 58%
        Extracting templates from packages: 88%
        Extracting templates from packages: 100%
        Preconfiguring packages ...
        Extracting templates from packages: 29%
        Extracting templates from packages: 58%
        Extracting templates from packages: 88%
        Extracting templates from packages: 100%
        Preconfiguring packages ...
        (Reading database ...
        (Reading database ... 5%
        (Reading database ... 10%
        (Reading database ... 15%
        (Reading database ... 20%
        (Reading database ... 25%
        (Reading database ... 30%
        (Reading database ... 35%
        (Reading database ... 40%
        (Reading database ... 45%
        (Reading database ... 50%
        (Reading database ... 55%
        (Reading database ... 60%
        (Reading database ... 65%
        (Reading database ... 70%
        (Reading database ... 75%
        (Reading database ... 80%
        (Reading database ... 85%
        (Reading database ... 90%
        (Reading database ... 95%
        (Reading database ... 100%
        (Reading database ... 189751 files and directories currently installed.)
        Preparing to replace python-problem-report 2.0.1-0ubuntu7 (using .../python-problem-report_2.0.1-0ubuntu9_all.deb) ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb (--unpack):
         subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Preparing to replace python-apport 2.0.1-0ubuntu7 (using .../python-apport_2.0.1-0ubuntu9_all.deb) ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb (--unpack):
         subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Preparing to replace apport 2.0.1-0ubuntu7 (using .../apport_2.0.1-0ubuntu9_all.deb) ...
        apport stop/waiting
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb (--unpack):
         subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        apport start/running
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Preparing to replace gnome-orca 3.4.1-0ubuntu0.1 (using .../gnome-orca_3.4.2-0ubuntu0.1_all.deb) ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb (--unpack):
         subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Preparing to replace python-piston-mini-client 0.7.2-0ubuntu1 (using .../python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb) ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb (--unpack):
         subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Preparing to replace oneconf 0.2.8 (using .../oneconf_0.2.8.1_all.deb) ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/oneconf_0.2.8.1_all.deb (--unpack):
         subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Preparing to replace software-center 5.2.2 (using .../software-center_5.2.2.2_all.deb) ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: warning: subprocess old pre-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Traceback (most recent call last):
          File "/usr/bin/pyclean", line 33, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error processing /var/cache/apt/archives/software-center_5.2.2.2_all.deb (--unpack):
         subprocess new pre-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Traceback (most recent call last):
          File "/usr/bin/pycompile", line 39, in <module>
            from debpython.namespace import add_namespace_files
        ValueError: bad marshal data (unknown type code)
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Preparing to replace libglade2-0 1:2.6.4-1ubuntu1 (using .../libglade2-0_1%3a2.6.4-1ubuntu1.1_amd64.deb) ...
        Unpacking replacement libglade2-0 ...
        Preparing to replace libv4l-0 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_amd64.deb) ...
        De-configuring libv4l-0:i386 ...
        Unpacking replacement libv4l-0 ...
        Preparing to replace libv4l-0:i386 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_i386.deb) ...
        Unpacking replacement libv4l-0:i386 ...
        Preparing to replace libv4lconvert0:i386 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_i386.deb) ...
        De-configuring libv4lconvert0 ...
        Unpacking replacement libv4lconvert0:i386 ...
        Preparing to replace libv4lconvert0 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_amd64.deb) ...
        Unpacking replacement libv4lconvert0 ...
        Errors were encountered while processing:
         /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb
         /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb
         /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb
         /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb
         /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb
         /var/cache/apt/archives/oneconf_0.2.8.1_all.deb
         /var/cache/apt/archives/software-center_5.2.2.2_all.deb
        Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
        Setting up libglade2-0 (1:2.6.4-1ubuntu1.1) ...
        dpkg: error processing gnome-orca (--configure):
         Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
        dpkg: error processing python-problem-report (--configure):
         Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
        Setting up libv4lconvert0 (0.8.6-1ubuntu2) ...
        Setting up libv4lconvert0:i386 (0.8.6-1ubuntu2) ...
        dpkg: error processing python-piston-mini-client (--configure):
         Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
        Setting up libv4l-0 (0.8.6-1ubuntu2) ...
        Setting up libv4l-0:i386 (0.8.6-1ubuntu2) ...
        dpkg: dependency problems prevent configuration of python-apport:
         python-apport depends on python-problem-report (>= 0.94); however:
          Package python-problem-report is not configured yet.
        dpkg: error processing python-apport (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of software-center:
         software-center depends on python-piston-mini-client (>= 0.1+bzr29); however:
          Package python-piston-mini-client is not configured yet.
        dpkg: error processing software-center (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of oneconf:
         oneconf depends on python-piston-mini-client (>= 0.3+bzr32-0ubuntu1); however:
          Package python-piston-mini-client is not configured yet.
        dpkg: error processing oneconf (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of apport:
         apport depends on python-apport (>= 2.0.1-0ubuntu7); however:
          Package python-apport is not configured yet.
        dpkg: error processing apport (--configure):
         dependency problems - leaving unconfigured
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    This has been going on for two weeks now and I cannot get any updates. Any help would be great.

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it's no longer headline news that governments have carried out large-scale data-mining programmes aimed at terrorism detection and identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification of this action will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections. Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC "Big Data" challenges to be faced and potential lessons to be learned from these high-profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity, the early or retrospective violation of regulations/laws/corporate policies and procedures, emerging risks and weakening controls, and so on? Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professionals' day-to-day challenges.

    So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data:

    Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.

    Machine-Generated/Sensor Data – includes Call Detail Records ("CDR"), weblogs and trading systems data.

    Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.

    The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it's often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define Big Data:

    Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise. This includes live data from packaged and custom applications – for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more.

    Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed.

    Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. Without question, all GRC professionals work in a dynamic environment, and as new services, new products, new business lines are added or new marketing campaigns executed, for example, new data types are needed to capture the resultant information.

    Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the amount of customer complaints. Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive actions, communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value.

    Big Challenges – Big Opportunities

    As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data. "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability, and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a Big Data investment in GRC clearly falls into this category. However, to make the most of Big Data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed. GRC Big Data clearly allows the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with means that the need to quickly access, classify, protect and manage that information can quickly become a key issue from a legal as well as a technical or operational standpoint. However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.

    The Right GRC & Big Data Partnership Becomes Key

    The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies who have a long history of success in delivering GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and Fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that the GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.

    A Blueprint and Roadmap Service for Big Data

    Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward.

    Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:

    Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data

    Conduct a big data readiness and Information Architecture maturity assessment

    Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology

    Provide initial guidance on big data candidate selection for migrations or implementation

    Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business

    Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment

    Conduct an executive workshop with recommendations and next steps

    There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions.
Big data is here to stay and risk management certainly is not going anywhere, and ultimately financial services industry organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • Strcpy() corrupts the copied string in Solaris but not Linux

    - by strictlyrude27
    Hi all, I'm writing C code for a class. The class requires that our code compile and run on the school server, which is a SPARC Solaris machine. I'm running Linux x64. I have this line to parse (THIS IS NOT ACTUAL CODE BUT IS INPUT TO MY PROGRAM):

        while ( cond1 ){

    I need to capture the "while" and the "cond1" into separate strings. I've been using strtok() to do this. In Linux, the following lines:

        char *cond = NULL;
        cond = (char *)malloc(sizeof(char));
        memset(cond, 0, sizeof(char));
        strcpy(cond, strtok(NULL, ": \t\(){")); // already got the "while" out of the line

    will correctly capture the string "cond1". Running this on the Solaris machine, however, gives me the string "cone1". Note that in plenty of other cases within my program, strings are being copied correctly. (For instance, the "while" was captured correctly.) Does anyone know what is going on here?

    Read the article

  • .odx Files in BizTalk 2009 (VS2008 IDE) Fail to open. "Unspecified Error"

    - by AllenG
    Okay, I'm not sure if this is an SO question or a ServerFault question, but I figured I'd post here first. I have a BizTalk project which works like a champ for its original design. It's been deployed and it's working fine. Today, I went in to add some new functionality by modifying one of my orchestrations. When I attempted to open it, I got a message which simply stated: "The operation could not be completed. Unspecified error." I've closed the IDE and re-opened; I've restarted the machine; I've even allowed Microsoft Updates to run and restart the machine. Everything else opens just fine (.xsd files, .btms, etc.), so it appears only to be the orchestrations which are failing. Has anyone ever encountered a similar issue and resolved it (short of reinstalling BizTalk/VS, or blowing the orchs away and rebuilding)? Any help would be appreciated.

    Read the article

  • Weird stack trace in exception "The incoming request does not match any route"

    - by Tassadaque
    I have published an ASP.NET MVC application from my local machine to IIS 6 on the server (Windows Server 2003). On the server I have set the default page to default.aspx, but when I try to browse the site on the server, it gives me the exception "The incoming request does not match any route". One thing I noticed: the stack trace, shown below, has the weird property that the exception is still pointing to a path on my local machine (line 5, marked with **):

        [HttpException (0x80004005): The incoming request does not match any route.]
        System.Web.Routing.UrlRoutingHandler.ProcessRequest(HttpContextBase httpContext) +15589
        System.Web.Routing.UrlRoutingHandler.ProcessRequest(HttpContext httpContext) +40
        System.Web.Routing.UrlRoutingHandler.System.Web.IHttpHandler.ProcessRequest(HttpContext context) +7
        **UserManagement._Default.Page_Load(Object sender, EventArgs e) in D:\Evoletpublishnew\UserManagement\UserManagement\Default.aspx.cs:18**
        System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14
        System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35
        System.Web.UI.Control.OnLoad(EventArgs e) +99
        System.Web.UI.Control.LoadRecursive() +50
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627

    Read the article

  • Java_swt in eclipse

    - by Ken
    Can someone tell me where I can find the executable "java_swt"? I see multiple sites that say it is embedded in Eclipse, and other sites say it is shipped with Mac SWT drops. I have the zip file for a Mac called "swt-3.5M6-carbon-macosx.zip", and I have the Eclipse IDE installed on my test Mac machine and Windows machine, but I cannot find this executable, which I need to run an SWT app smoothly on Mac OS X. Any help would be appreciated. Thanks.

    Read the article

  • asp mvc textreader

    - by czuroski
    Hello, I am trying to use a TextReader to retrieve data from a text file on a local machine. I have an upload routine to get the file for the reader to open. When I run the application locally, it works fine. When I publish the application to my server, I am getting a "Could not find a part of the path" error. I assume this error has to do with permissions, in that the server cannot read from the local file system, but I don't know the proper way to remedy this. Do I set up the application to impersonate the user? I know I can't possibly set up each local user's machine to allow the IUSR account to access it. Thanks for any thoughts.

    Read the article

  • Centralized Credentials Service For Various Apps

    - by Vlad
    We are researching the possibility of building a centralized credentials store for internal applications. These apps (VB6, VB.NET, web apps in ASP.NET, etc.) are using various instances of SQL Server and iSeries. We want to implement a central credentials facility that would act as a security broker. Basically it should work like this: a client app supplies an AppID ("I am the Sales application") and an EnvironmentID ("I am running in the QA environment") and in return will get either a connection object (preferred) or an encrypted connection string that will allow said application to connect to the resources it needs. There will be cases when an application needs to connect to two (or more) database resources (i.e. to SQL Server and iSeries). We are looking at DPAPI at the moment, but I am not convinced that DPAPI is the solution, as it is tied to the machine key. In our case using the machine key isn't feasible, so I want to know if there are other approaches available.
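    A minimal sketch of that broker contract; all the names and the in-memory store here are hypothetical stand-ins for a secured service:

        # Hypothetical broker: in production the store would live behind a
        # secured, audited service, not in process memory.
        _STORE = {
            ("SalesApp", "QA", "sql"):     "Server=qa-sql;Database=Sales;Integrated Security=SSPI",
            ("SalesApp", "QA", "iseries"): "DataSource=qa-iseries;DefaultCollection=SALES",
        }

        def get_connection_string(app_id, environment_id, resource):
            """Return the credential record for (app, environment, resource)."""
            try:
                return _STORE[(app_id, environment_id, resource)]
            except KeyError:
                raise LookupError("no credentials registered for %r"
                                  % ((app_id, environment_id, resource),))

        # An app that needs two back ends (SQL Server and iSeries) simply asks twice:
        sql_conn = get_connection_string("SalesApp", "QA", "sql")
        db2_conn = get_connection_string("SalesApp", "QA", "iseries")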

    Read the article

  • A generic error occurred in GDI+.

    - by Pinu
    A generic error occurred in GDI+:

        [ExternalException (0x80004005): A generic error occurred in GDI+.]
        System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams) +615257

    I have a web page in which a PDF is converted to a PNG and displayed using Response.OutputStream. When I run this on my local machine it works fine, but when I run the same code on the server it throws this exception. My question is: where could I look for the error? There is no inner exception or any other information provided. The only clue is that it's happening on Image.Save, but the same code works perfectly fine on my local machine, so why is it not working on production?

    Read the article

  • How to run a DHCP service on Windows 7 Home

    - by Joshua Lim
    I'm trying to set up a DHCP server on Windows 7 Home. I tried using a couple of freeware tools which I found on the Internet, but none seemed to work. What I did: on the Windows 7 machine I installed the DHCP server with the range 192.168.1.12-192.168.1.256. I set the Gigabit Ethernet adapter to a static IP address of 192.168.1.11 and a subnet mask of 255.255.255.0. When I did an ipconfig, it showed:

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix . :
           Description . . . . . . . . . . . : Broadcom NetLink (TM) Gigabit Ethernet
           Physical Address. . . . . . . . . : 8C-73-6E-75-A7-56
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           Link-local IPv6 Address . . . . . : fe80::196d:b6bb:8f93:2555%12(Preferred)
           IPv4 Address. . . . . . . . . . . : 192.168.1.11(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . :
           DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                               fec0:0:0:ffff::2%1
                                               fec0:0:0:ffff::3%1
           NetBIOS over Tcpip. . . . . . . . : Enabled

    I connected another Windows 7 machine to the "DHCP server" using a crossover cable and set the network adapter on that machine to obtain an IP address automatically. The client fails to acquire the correct IP address from the DHCP server and shows the autoconfigured IPv4 address instead. Here's the information returned by ipconfig /all on the client machine:

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix . :
           Description . . . . . . . . . . . : Atheros AR8152/8158 PCI-E Fast Ethernet Controller (NDIS 6.20)
           Physical Address. . . . . . . . . : 54-04-A6-40-96-4B
           DHCP Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes
           Link-local IPv6 Address . . . . . : fe80::4885:4082:5572:5a85%12(Preferred)
           Autoconfiguration IPv4 Address. . : 169.254.90.133(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.0.0
           Default Gateway . . . . . . . . . :
           DHCPv6 IAID . . . . . . . . . . . : 341050534
           DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-17-54-78-12-00-08-CA-46-4C-5A
           DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                               fec0:0:0:ffff::2%1
                                               fec0:0:0:ffff::3%1
           NetBIOS over Tcpip. . . . . . . . : Enabled

    A DHCP client is running on both machines. I've tried many times but failed, and googling also returned no useful information for my scenario. Have I missed a step? Thanks.
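    One quick check of whether the freeware DHCP service is actually listening (a sketch; run it on the server machine): only one process can bind UDP port 67, so if this bind succeeds, nothing is serving DHCP there:

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.bind(("", 67))   # DHCP servers listen on UDP 67
            print("UDP 67 was free - no DHCP server is actually listening")
        except socket.error as e:
            print("UDP 67 is in use - a DHCP service appears to be bound: %s" % e)
        s.close()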

    Read the article
