Search Results

Search found 21063 results on 843 pages for 'stochastic process'.

Page 560/843 | < Previous Page | 556 557 558 559 560 561 562 563 564 565 566 567  | Next Page >

  • How do I watch a file for changes using Python?

    - by Jon Cage
    I have a log file being written by another process which I want to watch for changes. Each time a change occurs I'd like to read the new data in and do some processing on it. What's the best way to do this? I was hoping there'd be some sort of hook in the PyWin32 library. I've found the win32file.FindNextChangeNotification function but have no idea how to ask it to watch a specific file. If anyone's done anything like this I'd be really grateful to hear how... [Edit] I should have mentioned that I was after a solution that doesn't require polling. [Edit] Curses! It seems this doesn't work over a mapped network drive. I'm guessing Windows doesn't 'hear' any updates to the file the way it does on a local disk.
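
    One non-polling approach, as a minimal pywin32 sketch: the change-notification APIs watch a directory rather than a single file, so you open a handle on the containing directory and filter the events by file name. The paths here are hypothetical placeholders.

        import win32con
        import win32file

        WATCH_DIR = r"C:\logs"     # hypothetical directory holding the log
        WATCH_FILE = "app.log"     # hypothetical log file name

        FILE_LIST_DIRECTORY = 0x0001

        handle = win32file.CreateFile(
            WATCH_DIR,
            FILE_LIST_DIRECTORY,
            win32con.FILE_SHARE_READ | win32con.FILE_SHARE_WRITE | win32con.FILE_SHARE_DELETE,
            None,
            win32con.OPEN_EXISTING,
            win32con.FILE_FLAG_BACKUP_SEMANTICS,  # needed to open a directory handle
            None,
        )

        while True:
            # Blocks until something in the directory changes; no polling loop.
            changes = win32file.ReadDirectoryChangesW(
                handle,
                8192,
                False,  # do not watch subdirectories
                win32con.FILE_NOTIFY_CHANGE_LAST_WRITE | win32con.FILE_NOTIFY_CHANGE_SIZE,
                None,
                None,
            )
            for action, name in changes:
                if name == WATCH_FILE:
                    pass  # seek to the saved offset and read the newly appended data here

    As the asker's second edit notes, these notifications are unreliable on mapped network drives, where falling back to polling may be unavoidable.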

    Read the article

  • Modify Embedded String in C# compiled exe

    - by nitefrog
    I have an issue where I need a compiled exe (.NET 3.5, C#) that I will make copies of to distribute, and a key, for example, will need to change in each copy before the exe is sent out. I cannot compile each time a new exe is needed. This is a thin client that will be used as part of a registration process. Is it possible to add an entry to a resource file with a blank value, and then, when a request comes in, have another application grab the blank default thin client, copy it, and populate the blank value with the data needed? If yes, how? If no, do you have any ideas? I have been scratching my head for a few days now, and the limitation is due to the boundaries I am required to work in. The other idea I had was to inject the value into a method, which I have no idea how I would even attempt. Thanks.
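
    One approach that avoids recompiling, sketched below under stated assumptions: compile the exe with a unique fixed-length placeholder literal, then have the server-side application copy the file and overwrite those bytes with an equal-length value (the client trims the padding at runtime). The placeholder and file names are hypothetical, .NET string literals are stored as UTF-16, and this only works if the assembly is not strong-name signed, since byte patching breaks the signature.

        # Hedged sketch: patch a fixed-length placeholder key inside a copied exe.
        PLACEHOLDER = "@@KEY_PLACEHOLDER_00000000000000@@"  # hypothetical literal compiled into the exe

        def patch_key(src_path, dst_path, key):
            marker = PLACEHOLDER.encode("utf-16-le")   # .NET string literals are UTF-16
            padded = key.ljust(len(PLACEHOLDER))       # keep the byte length identical
            if len(padded) != len(PLACEHOLDER):
                raise ValueError("key longer than placeholder")
            data = open(src_path, "rb").read()
            offset = data.find(marker)
            if offset < 0:
                raise ValueError("placeholder not found in exe")
            value = padded.encode("utf-16-le")
            with open(dst_path, "wb") as f:
                f.write(data[:offset] + value + data[offset + len(value):])

        patch_key("thinclient.exe", "thinclient_copy.exe", "ABC-123")

    A cleaner, heavier alternative is to rewrite the assembly properly with a library such as Mono.Cecil, or to round-trip it through ildasm/ilasm.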

    Read the article

  • Real-time aggregation of files from multiple machines to one

    - by dmitry-kay
    I need a tool which takes a list of machine names and file wildcards. It then connects to all these machines (over SSH) and begins to monitor changes (appends to the end) in each file matched by the wildcards. New lines in each such file are saved on the local machine to a file with the same name. (This is a real-time log collection task.) I could use ssh + tail -f, of course, but it is not very robust: if a monitoring process dies and then restarts, some data from the remote files may be lost, because tail -f does not save the position at which it finished. I could write this tool myself, but first I'd like to know whether such a tool already exists.
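
    The restart-safe part that plain tail -f lacks is just offset bookkeeping; here is a hedged sketch of that piece in Python, with the SSH transport omitted and all paths hypothetical (log rotation and truncation are also not handled):

        import os
        import time

        def tail_from_checkpoint(path, state_path, sink):
            """Resume-safe tail: persist the byte offset so a restart loses nothing."""
            offset = 0
            if os.path.exists(state_path):
                with open(state_path) as f:
                    offset = int(f.read() or 0)
            with open(path, "rb") as f:
                f.seek(offset)
                while True:
                    chunk = f.read()          # everything appended since the last read
                    if chunk:
                        sink(chunk)
                        with open(state_path, "w") as s:  # checkpoint after delivery
                            s.write(str(f.tell()))
                    else:
                        time.sleep(1.0)

        # usage: mirror new lines into a local copy of the remote log
        # tail_from_checkpoint("/var/log/app.log", "app.log.offset",
        #                      open("app.log", "ab").write)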

    Read the article

  • Unreachable breakpoint at execut(able/ing) code

    - by shadeMe
    I've got two DLLs, one written in native C++ and the other in C++/CLI. The former is injected into a process and, at a later point in time, loads the latter. While debugging, I noticed that the native DLL's breakpoints were functioning correctly while the other's weren't, even though its code was being executed. The breakpoints showed this message: "This breakpoint will not be hit. No executable code associated with this line. Possible causes include: preprocessor directives or compiler/linker optimizations." The Modules window tells me that the plugin's symbols are loaded, and I'm running its DEBUG build. Any ideas on why this is so, and perhaps a fix?

    Read the article

  • Pipeline For Downloading and Processing Files In Unix/Linux Environment With Perl

    - by neversaint
    I have a list of file URLs that I want to download:

        http://somedomain.com/foo1.gz
        http://somedomain.com/foo2.gz
        http://somedomain.com/foo3.gz

    What I want to do for each file is: download foo1, foo2, ... in parallel with wget and nohup, and every time a download completes, process the file with myscript.sh. What I have is this:

        #!/usr/bin/perl
        @files = glob("foo*.gz");
        foreach $file (@files) {
            my $downurls = "http://somedomain.com/" . $file;
            system("nohup wget $downurls &");
            system("./myscript.sh $file >> output.txt");
        }

    The problem is that I can't tell the above pipeline when a file has finished downloading, so myscript.sh doesn't get executed properly. What's the right way to achieve this?
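
    For what it's worth, the sequencing the question asks for (downloads in parallel, each file processed as soon as its own download finishes) can be sketched like this in Python rather than Perl; the URLs and script name are taken from the question, everything else is an assumption:

        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        URLS = ["http://somedomain.com/foo%d.gz" % i for i in (1, 2, 3)]

        def fetch_then_process(url):
            name = url.rsplit("/", 1)[-1]
            subprocess.check_call(["wget", "-q", url])   # blocks until this one download is done
            with open("output.txt", "ab") as out:        # only then run the processing step
                subprocess.check_call(["./myscript.sh", name], stdout=out)

        # Three downloads run in parallel; each worker serializes download, then process.
        with ThreadPoolExecutor(max_workers=3) as pool:
            list(pool.map(fetch_then_process, URLS))

    The same shape works in Perl with a fork and waitpid per file: the wait should apply to the one download the processing step depends on, not to the whole batch.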

    Read the article

  • How to generate makefile targets from variables?

    - by Ketil
    I currently have a makefile to process some data. The makefile gets the inputs to the data processing by sourcing a CONFIG file, which defines the input data in a variable. Currently, I symlink the input files into a local directory, i.e. the makefile contains:

        tmp/%.txt: tmp
            ln -fs $(shell echo $(INPUTS) | tr ' ' '\n' | grep $(patsubst tmp/%,%,$@)) $@

    This is not terribly elegant, but appears to work. Is there a better way? Basically, given

        INPUTS = /foo/bar.txt /zot/snarf.txt

    I would like to be able to have e.g.

        %.out: %.txt
            some command

    as well as targets to merge results depending on all $(INPUTS) files. Also, apart from the kludgosity, the makefile doesn't work correctly with -j, something that is crucial for the analysis to complete in reasonable time. I guess that's a bug in GNU make, but any hints are welcome.

    Read the article

  • How to assign permissions for Copy/Paste on Windows

    - by jalchr
    Well, as everyone knows, there is no way to assign permissions for copy/paste of files on the Windows platform. I need to control the copy process from a central file server, in a way that lets me know: which user performed the copy; which files were copied; where he pasted them; the total size of the data copied; and the time of the copy operation. If the user exceeds the allowed "copy limit", a dialog box asks him to enter administrative credentials or denies him (as configured). All this data should be stored in a file for later review or sent by email. I need to collect this data with a utility program on the server itself, without any installation on client computers. I know about monitoring the clipboard, but which clipboard would that be: the user's clipboard or the server's clipboard? And what about drag-and-drop operations, which don't even pass through the clipboard? Any knowledge of whether FileSystemWatcher is useful in such a case? Any ideas?

    Read the article

  • XSLT workflow with variable number of source files

    - by chiborg
    I have a bunch of XML files with a fixed, country-based naming scheme: report_en.xml, report_de.xml, report_fr.xml, etc. Now I want to write an XSLT stylesheet that reads each of these files via the document() XPath function, extracts some values, and generates one XML file with a summary. My question is: how can I iterate over the source files without knowing the exact names of the files I will process? At the moment I'm planning to generate an auxiliary XML file that holds all the file names and use it in my stylesheet to iterate over. The file list would be generated with a small PHP or bash script. Are there better alternatives? I am aware of XProc, but investing much time into it is not an option for me at the moment. Maybe someone can post an XProc solution, preferably one that includes workflow steps where the reports are downloaded as HTML and tidied up :) I will be using Saxon as my XSLT processor, so if there are Saxon-specific extensions I can use, these would also be OK.
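
    The auxiliary-list plan is easy to script; here is a hedged sketch of the generator in Python rather than PHP/bash (the file pattern comes from the question), producing a small driver document the stylesheet can walk with document(@href):

        import glob
        from xml.sax.saxutils import quoteattr

        # Collect report_en.xml, report_de.xml, ... into a driver document.
        with open("filelist.xml", "w") as out:
            out.write("<files>\n")
            for name in sorted(glob.glob("report_*.xml")):
                out.write("  <file href=%s/>\n" % quoteattr(name))
            out.write("</files>\n")

    Since Saxon is the processor, the collection() function may remove the need for the auxiliary file entirely: collection('.?select=report_*.xml') selects all matching documents in a directory directly.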

    Read the article

  • How to pass a URL in mailto's body

    - by Simer
    I need to put a URL of my site into the mail body so that the user can click on it to join my site. But it arrives in the mail client like this: "Link goes here http://www.example.com/foo.php?this=a". Everything in the URL after the & is lost, and then the whole joining process fails. How can I pass a URL like this in a mailto body: http://www.example.com/foo.php?this=a&join=abc&user454

        <a href="mailto:[email protected]?body=Link goes here http://www.example.com/foo.php?this=a&amp;really=long&amp;url=with&amp;lots=and&amp;lots=and&amp;lots=of&prameters=on_it">Link text goes here</a>

    I have searched a lot but didn't find the right answer. Thanks.
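
    The usual fix is to percent-encode the body value, so the & characters inside the link become %26 and survive the mailto round trip instead of being parsed as extra mailto parameters. A hedged sketch (Python; the link is from the question, the address is a placeholder):

        from urllib.parse import quote

        link = "http://www.example.com/foo.php?this=a&join=abc&user454"
        body = "Link goes here " + link
        href = "mailto:someone@example.com?body=" + quote(body, safe="")
        print(href)
        # mailto:someone@example.com?body=Link%20goes%20here%20http%3A%2F%2Fwww.example.com%2Ffoo.php%3Fthis%3Da%26join%3Dabc%26user454

    In PHP the equivalent encoding function is rawurlencode().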

    Read the article

  • .NET Code Evolution

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

    At my day job I look at a lot of code written by other people. Most of the code is quite good, and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many excuses, er, reasons for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I care about code quality quite a lot, which makes me a (perceived) slow worker who writes many tests and keeps refining the code because of design deficiencies. Most of the deficiencies I find by putting my design under stress while checking for invariants. It also helps a lot to step into the code with a debugger (sometimes also Windbg), which I do much more often when my tests are red. That way I get a much better understanding of what my code really does, and not what I think it should be doing.

    This time I want to show you how code can evolve over the years with different .NET Framework versions. Once there was a time when .NET 1.1 was new and many C++ programmers switched over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, a fast lookup table with O(1) time complexity. All was good, and much code has been written since then. In 2005 a new version of the .NET Framework arrived, bringing many new things like generics and new data structures. The "old-fashioned" Hashtable was coming to an end, and everyone used the new Dictionary<K,V> type instead, which was type safe and faster because the object-to-type conversion (aka boxing) was no longer necessary.

    I think 95% of all Hashtables and dictionaries use string as the key. Often it is convenient to ignore casing, to make it easy to look up values which the user entered. An often-followed route is to convert the string to upper case before putting it into the Hashtable:

        Hashtable Table = new Hashtable();

        void Add(string key, string value)
        {
            Table.Add(key.ToUpper(), value);
        }

    This is valid and working code, but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case-insensitively. Second, we can switch over to the (by now also old) Dictionary type to become a little faster, and we can keep the original (not upper-cased) keys in the dictionary:

        Dictionary<string, string> DictTable =
            new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        void AddDict(string key, string value)
        {
            DictTable.Add(key, value);
        }

    Many people do not use the other ctors of Dictionary because they shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand which you can use directly.

    Today, in the many-core era, we use threads all over the place. Sometimes things break in subtle ways, but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 by using two threads on a dual-core machine. Threading makes it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve, depending on the problem you are trying to solve.

    Let's suppose we want to process millions of items with two threads and count the items processed by all threads. Typical beginner's code might look like this:

        int Counter;

        void IJustLearnedToUseThreads()
        {
            var t1 = new Thread(ThreadWorkMethod);
            t1.Start();
            var t2 = new Thread(ThreadWorkMethod);
            t2.Start();
            t1.Join();
            t2.Join();

            if (Counter != 2 * Increments)
                throw new Exception("Hmm " + Counter + " != " + 2 * Increments);
        }

        const int Increments = 10 * 1000 * 1000;

        void ThreadWorkMethod()
        {
            for (int i = 0; i < Increments; i++)
            {
                Counter++;
            }
        }

    It throws an exception with a message like "Hmm 10.222.287 != 20.000.000" and never succeeds. The code fails because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for

        Counter = Counter + 1

    which involves reading the counter from a memory location into the CPU, incrementing the value on the CPU, and writing the new value back to the memory location. When we look at the generated assembly code, we see only

        inc dword ptr [ecx+10h]

    which is one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main-memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU reads a value from main memory into its cache, modifies it, and writes it back to main memory. The problem is that at least the L1 cache is not shared between CPUs, so one CPU can make changes to values which have changed in the meantime in main memory. From the exception you can see that we did increment the value 20 million times, but half of the changes were lost because we overwrote the already-changed value from the other thread. This is a very common case, and people learn to protect their data with proper locking:

        void Intermediate()
        {
            var time = Stopwatch.StartNew();
            Action acc = ThreadWorkMethod_Intermediate;
            var ar1 = acc.BeginInvoke(null, null);
            var ar2 = acc.BeginInvoke(null, null);
            ar1.AsyncWaitHandle.WaitOne();
            ar2.AsyncWaitHandle.WaitOne();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Intermediate()
        {
            for (int i = 0; i < Increments; i++)
            {
                lock (this)
                {
                    Counter++;
                }
            }
        }

    This is better and uses the .NET thread pool to get rid of manual thread management. It gives the expected result, but it can produce deadlocks because it locks on this. That is in general a bad idea, since it can lead to deadlocks when other threads use your class instance as a lock object. It is therefore recommended to create a private object as the lock object, to ensure that nobody else can lock your lock object.

    When you read more about threading, you will read about lock-free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It makes quite weak guarantees in general, but things can still work because your CPU architecture gives you more invariants than the CLR memory model does. For a simple counter, there is an easy lock-free alternative in .NET: the Interlocked class. As a general rule, though, you should not try to write lock-free algos, since most likely you will fail to get them right on all CPU architectures.

        void Experienced()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            t1.Wait();
            t2.Wait();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Experienced()
        {
            for (int i = 0; i < Increments; i++)
            {
                Interlocked.Increment(ref Counter);
            }
        }

    Since time moves forward, we no longer use threads explicitly, but the much nicer Task abstraction which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see:

        lock inc dword ptr [eax]

    The first thing to note is that there is no method call at all. Why? Because the JIT compiler knows very well about CPU intrinsic functions. Atomic operations which lock the memory bus, to prevent other processors from reading stale values, are such things. Second: this is the same increment instruction as before, prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsics, which can not only increment by one but can also do an add, an exchange, and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever, look at the generated IL code and try to reason about its efficiency, you will fail; only the generated machine code counts.

    Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how well we are doing currently:

        Level                       Time in s
        IJustLearnedToUseThreads    flawed code
        Intermediate                1.5  (lock)
        Experienced                 0.3  (Interlocked.Increment)
        Master                      0.1  (1.0 for int[2])

    That lock-free thing is really nice. But if you read more about CPU caches, cache coherency and false sharing, you can do even better:

        // Cache line size is 64 bytes on my machine with an 8-way associative cache;
        // try for yourself, e.g. 64 on more modern CPUs.
        int[] Counters = new int[12];

        void Master()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1);
            t1.Wait();
            t2.Wait();

            Counter = Counters[0] + Counters[Counters.Length - 1];

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Master(object number)
        {
            int index = (int)number;
            for (int i = 0; i < Increments; i++)
            {
                Counters[index]++;
            }
        }

    The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock-free version (by a factor of 3). Each CPU has its own cache line size, which is something in the range of 16-256 bytes. When you access a value at one location, the CPU does not fetch just that one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements, and to non-obvious code which is faster although it does many more memory reads than another algorithm.

    So what have we done here? We started with correct code, but it lacked knowledge of how to use the .NET Base Class Libraries optimally. Then we tried to get fancy, used threads for the first time, and failed. Our next try was better, but it still had non-obvious issues (the lock object was exposed to the outside). Knowledge increased further, and we found a lock-free version of our counter, which is nice, clean, and a perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to the data structures you use and to CPU cache coherency. Although we are working in a virtual execution environment, in a high-level language with automatic memory management, it pays off to know the details down to the assembly level. Only if you continue to learn and dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which helps with the digging-deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that, the rest must be easy ;-). Although I am no longer working in the science field, I take pride in discovering non-obvious things. This can be a very hard-to-find bug or a new way to restructure data to make something 10 times faster. Now I need to get some sleep….

    Read the article

  • Error in MySQL Workbench Forward Engineer Stored Procedures

    - by colithium
    I am using MySQL Workbench (5.1.18 OSS rev 4456) to forward engineer a SQL CREATE script. For every stored procedure, the automatic process outputs something like:

        DELIMITER //
        USE DB_Name//
        DB_Name//
        DROP procedure IF EXISTS `DB_Name`.`SP_Name` //
        USE DB_Name//
        DB_Name//
        CREATE PROCEDURE `DB_Name`.`SP_Name` (id INT)
        BEGIN
            SELECT * FROM Table_Name WHERE Id = id;
        END//

    The two lines consisting solely of the database name followed by the delimiter are errors, and are reported as such when running the script. As long as they are ignored, it looks like everything gets created just fine. But why would it add those lines? I am creating the database in the WAMP environment, which uses MySQL 5.1.36.

    Read the article

  • file upload in existing site with jquery and codeigniter

    - by Sam
    I am a newbie to the Ajax world. I am working on a site which uses jQuery and CodeIgniter and which processes big files, around 2 GB. It basically parses a file and stores some data extracted from it, using Ajax to show how far the file has been processed, etc. Now I want to change the way we process files: I want to first store the file on the server side and then start the processing. I evaluated CodeIgniter's upload class, but it looks like I cannot use it for this purpose, as the class works with a field_name and I could not find a way to make an Ajax call to the upload class. My question is: what would be suitable for my problem? Thanks in advance, Sam

    Read the article

  • Unregistering a COM wrapped .NET assembly

    - by flopdix
    I created a COM-exposed .NET 2.0 DLL and registered it with my Windows operating system using regasm and gacutil (in the GAC). Then I tried to call this component from my classic ASP page, and it worked fine. But I needed to change the functionality of my assembly, so I unregistered it using regasm and gacutil (removing it from the GAC), copied in my new .NET DLL, and registered it again (this time using a new version of the DLL). For some reason, I still have a pointer to my old assembly, and new calls to the DLL from the ASP page are not working. Any ideas on what process I need to follow to ensure that all references to the old version are completely removed? I appreciate any help.

    Read the article

  • Install .exe software application on remote machines.

    - by coral_reef
    Hi, I modified this script from the net, which is supposed to install .exe applications on remote machines:

        $m = Read-Host "Enter machine name"
        $File = "c:\temp\office2007sp2-kb958194-fullfile-en-us.exe"
        $product = [WMICLASS]"\\$m\ROOT\CIMV2:win32_Process"
        $product.Create($File)

    When I run this script, I have noticed that it promptly creates a process on the remote machine with the application name office2007sp2-kb958194-fullfile-en-us.exe; this can be seen in Task Manager as well. But other than that, there is no way to tell whether it is actually getting installed on the remote machine. Is there a way to find out if the installation is really happening? Or does this script actually work? Any help would be great! Regards, Arindam

    Read the article

  • innovation for technical high school

    - by gnuze
    I work in a high school in Italy. Our goal is to train computer programmers over 5 years. At the moment we teach VB.NET on Windows (desktop applications using ADO on Access), C on Linux (processes, threads), C++ on Linux (TCP/UDP sockets, with UML), and a bit of ASP.NET, Flash programming, PHP, Joomla and PIC microcontrollers. We are looking for something innovative to add to our programs of study, but every teacher has a different point of view: we are debating Python, C#, Arduino, Silverlight and smartphone programming. Any suggestions? Thanks in advance.

    Read the article

  • How do I redirect values to another page without a click event in HTML? The code below works in IE, but not in Mozilla

    - by karthik
    I have implemented PayPal on my web page. The process: the given inputs are redirected to another page (the second page), which takes that input and redirects to the PayPal page (the third page). We submit data on the first page; the values pass to the second page (where no user interaction is allowed) and then on to the third page. It works fine in IE, but not in Mozilla. Please send any solution. Code sample (second page):

        <% string product = Request.QueryString["productName"].ToString(); %>
        <% string amount = Request.QueryString["price"].ToString(); %>
        ...
        document.all.frmpaypal.submit();

    Fine in IE, not in Mozilla.

    Read the article

  • Trying to use code_swarm, but I'm having some Python scripting problems

    - by theprojectabot
    I am having issues running this:

        link-mbp:codeswarm-0.1 benb$ python convert_logs/convert_logs.py -perforce-path
        Traceback (most recent call last):
          File "convert_logs/convert_logs.py", line 408, in <module>
            main()
          File "convert_logs/convert_logs.py", line 350, in main
            files = run_marshal('p4 -G describe -s "' + changelist['change'] + '"')
        KeyError: 'change'

    I am trying to use code_swarm from this link http://blog.perforce.com/blog/?p=780&cpage=1#comment-965 to visualize my codebase changes. If I run p4 changes, everything shows up correctly, but the code in this Python script doesn't seem to process it correctly. If I run p4 describe on a changelist number, it reports correctly. Ideas?
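
    For what it's worth, p4 -G emits a stream of marshalled Python dicts, and alongside the real changelist records it can emit info/error records that carry no 'change' key, which would raise exactly this KeyError. A hedged sketch of a guard one could try around the failing line (run_marshal and changelist are names from the script; the surrounding loop is assumed):

        # Hypothetical patch near line 350 of convert_logs.py:
        for changelist in changelists:
            if 'change' not in changelist:
                continue  # skip 'info'/'error' records from p4 -G; they lack this key
            files = run_marshal('p4 -G describe -s "' + changelist['change'] + '"')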

    Read the article

  • ado.net slow updating large tables

    - by brett
    The problem: 100,000+ name & address records in an Access table (2003). I need to iterate through the table and update details with the output from a third-party DLL. I currently use ADO, and it works at an acceptable speed (less than 5 minutes on a network share). We will soon need to update to Access 2007 and its 'non-Jet' accdb format to maintain compatibility with clients. I've tried using ADO.NET DataSets, but updating the records takes hours! We process 5-10 of these tables per day, so this cannot be a solution. Any ideas on the fastest way to update individual records using ADO.NET? Surely we didn't take such a huge backward step with ADO.NET? Any help would be appreciated.

    Read the article

  • How can I check the version of an assembly then delete the assembly?

    - by Nescio
    I am using FileVersionInfo to retrieve the version of a .NET assembly. Then I want to immediately delete the file. Unfortunately, after I call GetVersionInfo, any attempt to delete the file results in an error: "…in use by another process…". Is there another technique to determine the version that does not lock the file? Or is it possible to ensure the lock is released after calling GetVersionInfo? The example below is heavily simplified, but its scope matches my real code:

        void Main()
        {
            var fvi = GetVersion("myPath");
            if (fvi.ToString() == "2.0.0.7")
                DeleteFile("myPath");
        }

        FileVersionInfo GetVersion(string path)
        {
            return FileVersionInfo.GetVersionInfo(path);
        }

        void DeleteFile(string path)
        {
            File.Delete(path);
        }

    Read the article

  • ASP.NET cached aspx page & IIS logs

    - by Vishal Seth
    Hi guys, is there any way to find out whether the ASP.NET runtime has served a cached copy of an ASPX page or actually went through the page life cycle? Here is my problem: I'm seeing many entries in my IIS log files that were served successfully (200 OK). I have corresponding logging code (Log4Net API) in the Session_Start and Application_BeginRequest() events that logs every request to my DB with more details. For some cases, I'm not seeing the corresponding entries in my SQL DB that should have been created by the Log4Net code. Are there any logs available to find out if a cached copy was served by the .NET worker process? Moreover, if my logging code threw an exception, wouldn't that show up as a 500 in the IIS logs? The code is on Windows Server 2008, IIS 7.

    Read the article

  • WiX, how to prevent files from uninstalling though we forgot to set Permanent="yes"

    - by Doc Brown
    We have a product installer created with WiX, containing a program package ("V1") and some configuration files. Now we are going to make a major upgrade with a new product code, where the old version of the product is uninstalled and "V2" is installed. We want to save one of the configuration files from being uninstalled, since it is needed by V2, too. Unfortunately, we forgot to set the Permanent="yes" option when we delivered V1 (read this question for more information). So here is the question: is there an easy way of preventing the uninstall of the file anyway? Of course, we could add a custom action to the script to back up the file before uninstallation and another custom action to restore it afterwards, but IMHO that seems to be overkill for this task, and it might interfere with other parts of the MSI registration process.

    Read the article

  • App Engine HTTP 500s

    - by pocoa
    "This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application." I've handled all the usual situations, including DeadlineExceededError. But sometimes I see these messages in the error logs. The request took about 10k ms, so it did not exceed the limit either, and there is no other specific message about the error; all I know is that it returned HTTP 500. Does anyone know the reason for these error messages? Thank you.

    Read the article

  • Pushwoosh SDK Notification Issue

    - by dvb
    I have integrated the Pushwoosh SDK in my Android application and it works fine. I have done the cross-platform setup to run the Android application on my BlackBerry Z10 device; the application runs fine, but I am not able to receive notifications on the BlackBerry Z10 the way I do on Android. I am getting this error:

        06-06 14:34:21.314: I/QNXNavigatorClient(8708260): Already active: com.packagename.pushdemo
        06-06 14:34:21.662: I/QNXNavigatorClient(8708260): PackagesOpenedRunnable: [com.packagename.pushdemo]
        06-06 14:34:21.662: I/QNXNavigatorClient(8708260): Shell com.packagename.pushdemo cannot join group, group was already joined
        06-06 14:34:21.668: I/ActivityManager(8708260): Displayed com.packagename.pushdemo/.MainActivity: +347ms
        06-06 14:34:24.118: E/QNXShrimpClient(8708260): com.google.android.c2dm.intent.REGISTER error(10108) = ""
        06-06 14:34:24.124: W/ContextImpl(8708260): Calling a method in the system process without a qualified user: android.app.ContextImpl.sendOrderedBroadcast:1061 com.android.server.QNXShrimpClient.onRegisterComplete:162 com.qnx.service.pps.shrimp.ShrimpController.onMessageReceived:128 com.qnx.service.pps.PPSObject.processMessage:292 com.qnx.service.pps.PPSObject.access$500:11

    I am not able to get the push notification on my BlackBerry Z10 device.

    Read the article

  • Spring 3 - Custom Security

    - by Eqbal
    I am in the process of converting a legacy application from a proprietary technology to a Spring-based web app, leaving the backend system as is. The login service is provided by the backend system through a function call that takes in some parameters (username, password, plus some others) and provides an output that includes the authorizations for the user and other properties like first name, last name, etc. What do I need to do to weave this into the Spring 3.0 security module? It looks like I need to provide a custom AuthenticationProvider implementation (is this where I call the backend function?). Do I also need custom User and UserDetailsService implementations, the latter providing loadUserByUsername(String username)? Any pointers to good documentation for this? The reference that came with the download is okay, but doesn't help much in terms of implementing custom security.

    Read the article

  • sqlcipher command line not working

    - by Min Lin
    I have an encrypted SQLite DB and its key (both generated by an Android program). However, when I open the DB on the command line I cannot read it. The command-line tool was installed with:

        brew install sqlcipher

    I open the database with:

        sqlcipher EnDB.db
        > pragma key = "6b74fcd";
        > select * from bizinfo;

    It keeps telling me "Error: file is encrypted or is not a database". However, if I open the database file with the GUI app SQLite Database Browser (which is a Windows program that I run in Wine), it pops up a window for me to enter the key, and with 6b74fcd as the key it successfully reads the database. As I want to process the DB automatically in the future, I cannot depend on the GUI. Do you know why the command line is not working?

    Read the article
