Search Results

Search found 19499 results on 780 pages for 'transaction log'.

Page 196/780

  • Why does this crash?

    - by Adam Driscoll
    I've been banging my head... I can't pretend to be a C++ guy...

        TCHAR * pszUserName = userName.GetBuffer();
        SID sid;
        SecureZeroMemory(&sid, sizeof(sid));
        SID_NAME_USE sidNameUse;
        DWORD cbSid = sizeof(sid);
        pLog->Log(_T("Getting the SID for user [%s]"), 1, userName);
        if (!LookupAccountName(NULL, (LPSTR)pszUserName, &sid, &cbSid, NULL, 0, &sidNameUse))
        {
            pLog->Log(_T("Failed to look up user SID. Error code: %d"), 1, GetLastError());
            return _T("");
        }
        pLog->Log(_T("Converting binary SID to string SID"));

    The message 'Getting the SID for user [x]' is written, but then the app crashes. I'm assuming it was the LookupAccountName call. EDIT: Whoops, userName is an MFC CString.
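    One thing worth double-checking (a guess, not a diagnosis): LookupAccountName expects every size argument to point at a real DWORD, and a bare SID struct is usually too small for the account's actual SID, so the usual Win32 pattern is to call the function twice, first with empty buffers to learn the required sizes. A minimal sketch of that pattern, with illustrative names and no MFC dependencies:

        #include <windows.h>
        #include <tchar.h>
        #include <vector>

        // Returns the SID for pszUserName in sidBuf using the two-call pattern.
        bool GetSidForUser(LPCTSTR pszUserName, std::vector<BYTE>& sidBuf)
        {
            DWORD cbSid = 0, cchDomain = 0;
            SID_NAME_USE use;
            // First call: buffers are NULL, so the required sizes are reported
            // and the call fails with ERROR_INSUFFICIENT_BUFFER.
            LookupAccountName(NULL, pszUserName, NULL, &cbSid, NULL, &cchDomain, &use);
            if (GetLastError() != ERROR_INSUFFICIENT_BUFFER)
                return false;
            sidBuf.resize(cbSid);
            std::vector<TCHAR> domain(cchDomain);
            // Second call: buffers are now large enough for the SID and the domain name.
            return LookupAccountName(NULL, pszUserName, sidBuf.data(), &cbSid,
                                     domain.data(), &cchDomain, &use) != FALSE;
        }

    A CString converts to LPCTSTR implicitly, so the (LPSTR) cast in the question should not be needed in a matching TCHAR build.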

    Read the article

  • logrotate compress files after the postrotate script

    - by Thomas
    I have an application generating a really big log file every day (~800 MB a day), so I need to compress it. Since the compression takes time, I want logrotate to compress the file only after reloading/sending the HUP signal to the application.

        /var/log/myapp.log {
            rotate 7
            size 500M
            compress
            weekly
            postrotate
                /bin/kill -HUP `cat /var/run/myapp.pid 2>/dev/null` 2>/dev/null || true
            endscript
        }

    Is it already the case that the compression takes place after the postrotate script (which would be counter-intuitive)? If not, can anyone tell me if it's possible to do that without an extra command script (an option or some trick)? Thanks, Thomas
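    For what it's worth, logrotate's delaycompress directive is the usual way to guarantee that a rotated file is only compressed on the next rotation cycle, long after the postrotate HUP has been delivered and the application has switched to the new file. A sketch of the same configuration with that option added (retention values unchanged):

        /var/log/myapp.log {
            rotate 7
            size 500M
            compress
            delaycompress
            weekly
            postrotate
                /bin/kill -HUP `cat /var/run/myapp.pid 2>/dev/null` 2>/dev/null || true
            endscript
        }

    The trade-off is that the most recently rotated file stays uncompressed until the following rotation.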

    Read the article

  • WordPress MU: Login from main page but not individual blogs

    - by bradrhine
    I recently upgraded to WPMU 2.8.6 and ever since, my users can't log in on their individual blogs, but they can log in from the main page. My site is at blogs.mtwp.net (we're a school district). So if a user goes to blogs.mtwp.net/BLOGNAME/wp-login.php, their password is rejected. If they go to blogs.mtwp.net/wp-login.php, they can log in and get to the dashboard from there. But it's not all users: site admins can get in just fine. We're using wpDirAuth 1.4, if that makes a difference. Honestly, I'm stumped. Any help would be very much appreciated. Thanks!

    Read the article

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today's web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties.

    CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions.

    An important feature of the LLAPI is the ability to join transactions. This is a common requirement for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these "multi-client" transactions at speed with no loss in ACID properties.

    Developing software around an IMDG like Oracle Coherence is an important choice for today's web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without using CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly.

    For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected].

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community. Please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: Coherence,cloudtran,cache,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Efficient, partial, point-in-time database restores

    - by GavinPayneUK
    This article is about a situation that many of us could describe the theoretical approach to solving, but then struggle to understand why SQL Server wasn't following that theoretical approach when we tried it for real. Earlier this week, I had a client ask about the best way to perform: a partial database restore (1 of 1300 filegroups); to a specific point in time; using a differential backup, and therefore without restoring each transaction log backup taken since the full backup. The last point...(read more)
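    For readers who want the rough shape of such a restore, a piecemeal point-in-time restore in T-SQL looks broadly like the sketch below; the database, filegroup, path and timestamp values are placeholders, not taken from the article:

        RESTORE DATABASE SalesCopy
            FILEGROUP = 'FG_Archive'
            FROM DISK = 'D:\backups\sales_full.bak'
            WITH PARTIAL, NORECOVERY;

        RESTORE DATABASE SalesCopy
            FROM DISK = 'D:\backups\sales_diff.bak'
            WITH NORECOVERY;

        RESTORE LOG SalesCopy
            FROM DISK = 'D:\backups\sales_log_001.trn'
            WITH STOPAT = '2012-06-01 10:30:00', RECOVERY;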

    Read the article

  • Is there a better way than #if DebugMode for logging

    - by Daniel
    I'm making a C++ library that's going to be P/Invoked from C#, so I am unable to breakpoint/debug the C++ side of things. So I decided to add logging so I can see if anything goes wrong and where it happens. I added a #define DebugMode 1 in order to determine whether I am to log or not. First of all, I'm not very good at C++, but I know enough to get around. So my questions are: Is there a better way than wrapping #if DebugMode ... #endif around every Log call? I could simply do that check inside the method and just return if logging is disabled, but won't that mean all those logging strings will still be in the assembly? And how can I emulate what printf does with its "..." operator, enabling me to pass something like Log("Variable x is {0}", x); Thanks!
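    One common pattern (a sketch with made-up names, not the only way) is to funnel every call through a single variadic macro: only one #if is needed, the format strings disappear from the binary when DebugMode is 0, and printf-style formatting handles the variable arguments (note that printf uses %d / %s placeholders rather than {0}):

        #include <cstdarg>
        #include <cstdio>

        #define DebugMode 1

        // Forwards a printf-style format string and its arguments to stderr.
        inline void LogImpl(const char* fmt, ...)
        {
            va_list args;
            va_start(args, fmt);
            std::vfprintf(stderr, fmt, args);
            std::fprintf(stderr, "\n");
            va_end(args);
        }

        #if DebugMode
        #  define LOG(...) LogImpl(__VA_ARGS__)
        #else
        #  define LOG(...) ((void)0)   // the whole call, strings included, compiles away
        #endif

        // Usage: LOG("Variable x is %d", x);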

    Read the article

  • Tomcat: recommendations for logging

    - by WizardOfOdds
    I've read several questions here concerning Tomcat and logging, but I still really don't understand the "bigger picture", hence my question: How and where are my webapps supposed to do their logging? By default on my setup, Tomcat 6.0.20 logs go to the following file/appender: ./apache-tomcat-6.0.20/logs/catalina.out. Am I supposed to have my webapps also log to this file/appender? Let's say my case is trivially simple and I've got just one servlet:

        import ... // What do I import here in order to be able to log?

        public class SOServlet extends HttpServlet {
            public void doGet(final HttpServletRequest request, final HttpServletResponse response)
                    throws IOException, ServletException {
                ... // I want to log here, what do I write?
            }
        }

    What are the gotchas, knowing that more than one webapp is running on the same Tomcat? (Apparently, from reading the various questions, there are many gotchas.) What about the .war, do I need to put log4j/slf4j/commons-logging/whatever in my .war?
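    One widely used arrangement (a sketch, and only one of several valid options) is to put SLF4J plus a backend such as Logback inside the webapp's own WEB-INF/lib, so each webapp keeps its own configuration and log file instead of sharing catalina.out:

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        public class SOServlet extends HttpServlet {
            // One logger per class; the binding in WEB-INF/lib decides where it writes.
            private static final Logger log = LoggerFactory.getLogger(SOServlet.class);

            public void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws IOException, ServletException {
                log.info("Handling GET for {}", request.getRequestURI());
            }
        }

    Keeping the logging jars inside each webapp rather than in Tomcat's shared lib directory avoids the classic cross-webapp classloader gotcha.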

    Read the article

  • Who Are the BI Users in Your Neighborhood?

    - by [email protected]
    By Brian Dayton on March 19, 2010 10:52 PM

    Forrester's Boris Evelson recently wrote a blog titled "Who are the BI Personas?" that I enjoyed for a number of reasons. It's a quick read, easy to grasp and (refreshingly) focuses on the users of technology vs. the technology. As Evelson admits, he meant to keep the reference chart at a high level because there are too many different permutations and additional sub-categories to make such a chart useful. For me, I wouldn't head into the technical permutations but more the contextual use of BI and the issues that users experience. My thoughts brought up more questions than answers, such as:

    Context:
    - HOW: With the exception of the "Power User" persona, likely some sort of business or operations analyst?
    - WHEN: Are they using the information to make real-time decisions on the front lines (a customer service manager or shipping/logistics VP), or are they using this information for cumulative analysis and business planning? Or both?
    - WHERE: What areas of the business are more or less likely to rely on BI across an organization? Human Resources, Operations, Facilities, Finance - and why are some more prone to use data-driven analysis than others?

    Issues:
    - DELAYS & DRAG ON IT?: One of the persona characteristics Evelson calls out is a reliance on IT. Every persona except for the "Power User" has a heavy reliance on IT for support. What business issues or delays does that cause for users? What is the drag on IT resources who could potentially be creating instead of reporting?
    - HOW MANY CLICKS: If BI is being used within the context of a transaction (a sales manager looking for upsell opportunities, as an example), is that person getting the information within the context of that action or transaction? Or are they minimizing screens, logging into another application or reporting tool, running queries, etc.?

    Who are the BI users in your neighborhood or line of business? Do Evelson's personas resonate, and do the tools that he calls out (he refers to it as "BI Style") resonate with what your personas have or need? Finally, I'm very interested in whether BI use is viewed as a bolt-on... or an integrated part of your daily enterprise processes?

    Read the article

  • Cannot access Android /data folder?

    - by fonter
        try {
            Runtime rt = Runtime.getRuntime();
            Process pcs = rt.exec("ls -l /data");
            BufferedReader br = new BufferedReader(new InputStreamReader(pcs.getInputStream()));
            String line = null;
            while ((line = br.readLine()) != null) {
                Log.e("line", "line=" + line);
            }
            br.close();
            pcs.waitFor();
            int ret = pcs.exitValue();
            Log.e("ret", "ret=" + ret);
        } catch (Exception e) {
            Log.e("Exception", "Exception", e);
        }

    It only prints "ret=0". How do I print the correct path?
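    A likely explanation (a guess): the app has no permission to read /data on a stock device, and any error from ls goes to stderr, which the code never reads, so stdout is simply empty. A sketch that stays in Java and makes the failure visible:

        import java.io.File;
        import android.util.Log;

        public class DataLister {
            public static void listData() {
                File dataDir = new File("/data");
                File[] entries = dataDir.listFiles();
                if (entries == null) {
                    // On non-rooted devices /data is not readable by ordinary apps.
                    Log.e("data", "Cannot list /data (permission denied?)");
                    return;
                }
                for (File f : entries) {
                    Log.e("line", "line=" + f.getAbsolutePath());
                }
            }
        }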

    Read the article

  • PeopleSoft New Design Solves Navigation Problem

    - by Applications User Experience
    Anna Budovsky, User Experience Principal Designer, Applications User Experience

    In PeopleSoft we strive to improve the user experience on all levels. Simplifying navigation and streamlining access to the most important pages is always an important goal. No one likes to waste time waiting for pages to load and watching a spinning hourglass going on and on. Those performance-affecting server trips, page-load waits and just-too-many clicks were complained about for a long time. Something had to be done. A few new designs came in PeopleSoft 9.2 to help users access their everyday work areas more easily and faster. For example, Dashboard and Work Center aggregate the most accessed information sections on a single page; Related Information allows users to complete transaction-related research without interrupting a transaction; and Secure Search gets users to a specific page directly. Today we'll talk about the Actions menu.

    Most PeopleSoft pages are shared between individual products and product lines. It means changing the content on a single page involves Oracle development and quality assurance time for making and testing the changes. In order to streamline the navigation and cut down on accessing PeopleSoft pages one page at a time, we introduced a new menu design. The new menu allows accessing shared pages without the Oracle development team making any local changes, and it works as an additional one-click path to specific high-traffic actionable pages. Let's look at how many steps it took to Change Salary for an employee in HCM 9.1 before:

    Figure 1. BEFORE: The 6 steps a user would take to Change Salary in PeopleSoft HCM 9.1

    In PeopleSoft 9.1 it took 5 steps + page-loading time + additional verification time for making sure a correct employee is selected from the table. In PeopleSoft 9.2 it only takes 2 steps. To complete the Ad Hoc Change Salary action, the user can start from the HCM Manager's Dashboard, click the Actions menu within a table, choose a menu option, and access the correct employee's details page to take the action.

    Figure 2. AFTER: The 2 steps a user would take to Change Salary in PeopleSoft HCM 9.2

    The new menu is placed at the row level, which ensures the user accesses the correct employee's details page. The Actions menu separates menu options into hierarchical sections, which helps users scan and access the correct option quickly. The new menu's small size and its structure enable users to access high-traffic pages from any page and from any part of the page. No more spinning hourglass, no more multiple page loads. The flexible design fits anywhere on a page and provides a fast and reliable path to the correct destination within the product. Now users can: access any target page no matter how far it is buried from the starting point; reduce navigation and page-load time; and improve productivity and reduce errors. The new menu design is available and widely used in all PeopleSoft 9.2 product lines.

    Read the article

  • ruby on rails adding new route

    - by ohana
    I have an RoR application called Log, which is similar to the book store app. My logs_controller has all the default actions: index, show, update, create, delete. Now I need to add a new action :toCSV. I defined it in logs_controller and added a new route in config/routes as:

        map.resources :logs, :collection => { :toCSV => :get }

    From irb, I checked the routes and can see the new route is already added:

        rs = ActionController::Routing::Routes
        puts rs.routes
        GET /logs/toCSV(.:format)? {:controller=>"logs", :action=>"toCSV"}

    Then I ran the 'rake routes' command in a shell, and it returned:

        toCSV_logs GET /logs/toCSV(.:format) {:controller=>"logs", :action=>"toCSV"}

    Everything seems to be working. Finally, in my view code, I added the following:

        link_to 'Export to CSV', toCSV_logs_path

    When I access it in the browser at 'http://localhost:3000/logs/toCSV', it complained: Couldn't find Log with ID=toCSV. I checked in script/server and saw this:

        ActiveRecord::RecordNotFound (Couldn't find Log with ID=toCSV):
        app/controllers/logs_controller.rb:290:in `show'

    It seems that when I click that link, it is directed to the action 'show' instead of 'toCSV', and thus it takes 'toCSV' as an id... Does anyone know why this would happen, and how to fix it? Thanks...

    Read the article

  • Google sues the US IRS for overpaying its taxes in 2004, claiming a refund of more than 80 million dollars

    Google is taking the US IRS to court, claiming it overpaid its taxes in 2004, and is asking it for more than 80 million dollars. Google has just started proceedings against the U.S. Internal Revenue Service, the American tax authority, to recover 83.5 million dollars which, according to the Internet giant, are owed to it. The dispute concerns a stock-market operation involving warrants (highly leveraged subscription rights, often described as speculative) during a transaction with AOL. Warrants are options to buy (or sell) an underlying product (here, Google shares) that allow their holders (here, AOL) to buy (or sell) that underlying at a fixed price determi...

    Read the article

  • How to disable log4j logging from Java code

    - by Erel Segal Halevi
    I use a legacy library that writes logs using log4j. My default log4j.properties file directs the log to the console, but in some specific functions of my main program I would like to disable logging altogether (from all classes). I tried this:

        Logger.getLogger(BasicImplementation.class.getName()).setLevel(Level.OFF);

    where "BasicImplementation" is one of the main classes that does logging, but it didn't work: the logs are still written to the console. Here is my log4j.properties:

        log4j.rootLogger=warn, stdout
        log4j.logger.ac.biu.nlp.nlp.engineml=info, logfile
        log4j.logger.org.BIU.utils.logging.ExperimentLogger=warn
        log4j.appender.stdout = org.apache.log4j.ConsoleAppender
        log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
        log4j.appender.stdout.layout.ConversionPattern = %-5p %d{HH:mm:ss} [%t]: %m%n
        log4j.appender.logfile = ac.biu.nlp.nlp.log.BackupOlderFileAppender
        log4j.appender.logfile.append=false
        log4j.appender.logfile.layout = org.apache.log4j.PatternLayout
        log4j.appender.logfile.layout.ConversionPattern = %-5p %d{HH:mm:ss} [%t]: %m%n
        log4j.appender.logfile.File = logfile.log
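    Setting the level on a single class's logger only affects that one category, and loggers given explicit levels in log4j.properties keep their own levels. One approach (a sketch, not from the original code) is the repository-wide threshold, which gates every logger at once and can be restored afterwards:

        import org.apache.log4j.Level;
        import org.apache.log4j.LogManager;
        import org.apache.log4j.spi.LoggerRepository;

        public final class LogSilencer {
            // Runs work with all log4j output suppressed, then restores the old threshold.
            public static void runQuietly(Runnable work) {
                LoggerRepository repo = LogManager.getLoggerRepository();
                Level previous = repo.getThreshold();
                repo.setThreshold(Level.OFF);   // repository-wide gate, stronger than any single logger level
                try {
                    work.run();
                } finally {
                    repo.setThreshold(previous);
                }
            }
        }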

    Read the article

  • Help with calling a secure (SSL) webservice in Android.

    - by mmattax
    I'm new to Android and am struggling to make a call to an SSL web service from an Android application. My code is as follows:

        Log.v("fs", "Making HTTP call...");
        HttpClient http = new DefaultHttpClient();
        HttpGet request = new HttpGet("https://example.com/api");
        try {
            String response = http.execute(request, new BasicResponseHandler());
            Log.v("fs", response);
        } catch (Exception e) {
            Log.v("fs", e.toString());
        }

    The output is:

        Making HTTP call...
        javax.net.SSLPeerUnverifiedException: No peer certificate

    Any suggestions to make this work would be great. EDIT: I should note that this is a valid cert. It is not self-signed.

    Read the article

  • IIS6 + PHP + FastCGI 500 Errors - Where do I start looking?

    - by Bertvan
    I've set up IIS6 with FastCGI to use php-cgi.exe. I have some PHP websites by external parties that I'm trying to run in a test environment. One of the websites just plain gives me a FastCGI error page. Is there some way to enable logging somewhere so that I can get a bit more information on this problem? I have looked in:

        - the event log
        - the IIS website log (c:\windows\system32\Logfiles)
        - the PHP log

    but found nothing, except that the IIS website log records the return of a 500 page. Is there any other way to debug/check where things might be going wrong? Here is what the page looks like:

        FastCGI Error
        The FastCGI Handler was unable to process the request.
        Error Details:
        The FastCGI process exited unexpectedly
        Error Number: -1073741571 (0xc00000fd).
        Error Description: Unknown Error
        HTTP Error 500 - Server Error. Internet Information Services (IIS)
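    Two leads that often help here (suggestions only, paths are examples): error number 0xc00000fd is the Windows status code for a stack overflow, which in PHP is frequently caused by runaway recursion; and php-cgi.exe can be told to write its own error log, so fatal errors show up even when IIS only records a bare 500. A sketch of the relevant php.ini settings:

        ; php.ini used by php-cgi.exe (paths are examples)
        log_errors = On
        error_log = "C:\php\logs\php-errors.log"
        display_errors = Off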

    Read the article

  • How to skip extra lines before the header of a tab-delimited file in R

    - by Michael Dunn
    The software I am using produces log files with a variable number of lines of summary information followed by lots of tab-delimited data. I am trying to write a function that reads the data from these log files into a data frame, ignoring the summary information. The summary information never contains a tab, so the following function works:

        read.parameters <- function(file.name, ...){
            lines <- scan(file.name, what="character", sep="\n")
            first.line <- min(grep("\\t", lines))
            return(read.delim(file.name, skip=first.line-1, ...))
        }

    However, these log files are quite big, so reading the file twice is very slow. Surely there is a better way?
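    A sketch of a single-pass version (same idea, just avoiding the second read): pull the lines into memory once, find the first tab-containing line, and hand the remainder to read.delim through a text connection:

        read.parameters <- function(file.name, ...) {
          lines <- readLines(file.name)                       # one pass over the file
          first.line <- min(grep("\t", lines, fixed = TRUE))  # first tab-delimited line
          con <- textConnection(lines[first.line:length(lines)])
          on.exit(close(con))
          read.delim(con, ...)
        }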

    Read the article

  • Can't get values from SQLite DB using query

    - by Sana Joseph
    I used SQLite to populate a DB with some tables in it. I made a function on another JavaScript page that runs against the database and selects some values from the table. JavaScript:

        function GetSubjectsFromDB() {
            tx.executeSql('SELECT * FROM Subjects', [], queryNSuccess, errorCB);
        }

        function queryNSuccess(tx, results) {
            alert("Query Success");
            console.log("Returned rows = " + results.rows.length);
            if (!results.rowsAffected) {
                console.log('No rows affected!');
                return false;
            }
            console.log("Last inserted row ID = " + results.insertId);
        }

        function errorCB(err) {
            alert("Error processing SQL: " + err.code);
        }

    Is there some problem with this line?

        tx.executeSql('SELECT * FROM Subjects', [], queryNSuccess, errorCB);

    queryNSuccess isn't called, and neither is errorCB, so I don't know what's wrong.
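    Assuming this is the Web SQL / Cordova-style API, executeSql normally has to run inside a transaction callback (so that tx is defined), and for a SELECT the data comes from results.rows rather than rowsAffected. A sketch under those assumptions, with a placeholder database name:

        var db = window.openDatabase("mydb", "1.0", "mydb", 2 * 1024 * 1024);

        function GetSubjectsFromDB() {
            db.transaction(function (tx) {
                tx.executeSql('SELECT * FROM Subjects', [], queryNSuccess, errorCB);
            });
        }

        function queryNSuccess(tx, results) {
            console.log("Returned rows = " + results.rows.length);
            for (var i = 0; i < results.rows.length; i++) {
                console.log(results.rows.item(i));   // one row from Subjects
            }
        }

        // Per-statement error callback receives (transaction, error).
        function errorCB(tx, err) {
            console.log("Error processing SQL: " + err.code);
        }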

    Read the article

  • Stairway to SQLCLR Level 3: Security (General and SAFE Assemblies)

    In the third level of our Stairway to SQLCLR, we look at the various mechanisms in place to help us control Security. In this Level we will focus on SAFE mode and see how secure SQLCLR is by default.

    Read the article

  • Simulating an identity column within an insert trigger

    - by William Jens
    I have a table for logging that needs a log ID, but I can't use an identity column because the log ID is part of a combo key.

        create table StuffLogs
        (
            StuffID int,
            LogID int,
            Note varchar(255)
        )

    There is a combo key on StuffID & LogID. I want to build an insert trigger that computes the next LogID when inserting log records. I can do it for one record at a time (see below for how LogID is computed), but that's not really efficient, and I'm hoping there's a way to do this without cursors.

        select @NextLogID = isnull(max(LogID),0)+1
        from StuffLogs
        where StuffID = (select StuffID from inserted)

    The net result should allow me to insert any number of records into StuffLogs with the LogID column computed automatically.

        StuffID  LogID  Note
        123      1      foo
        123      2      bar
        456      1      boo
        789      1      hoo

    Inserting another record with StuffID: 123, Note: bop will result in the following record:

        StuffID  LogID  Note
        123      3      bop
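    One set-based approach (a sketch; it still needs locking hints such as HOLDLOCK if concurrent inserts for the same StuffID are possible) is an INSTEAD OF INSERT trigger that numbers the incoming rows per StuffID with ROW_NUMBER and offsets them by the current maximum LogID:

        CREATE TRIGGER trStuffLogs_Insert ON StuffLogs
        INSTEAD OF INSERT
        AS
        BEGIN
            INSERT INTO StuffLogs (StuffID, LogID, Note)
            SELECT i.StuffID,
                   ISNULL(m.MaxLogID, 0)
                     + ROW_NUMBER() OVER (PARTITION BY i.StuffID ORDER BY (SELECT NULL)),
                   i.Note
            FROM inserted AS i
            LEFT JOIN (SELECT StuffID, MAX(LogID) AS MaxLogID
                       FROM StuffLogs
                       GROUP BY StuffID) AS m
                ON m.StuffID = i.StuffID;
        END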

    Read the article

  • Using JSF, JPA and DAO. Without Spring?

    - by ich-bin-drin
    Hi, until now I have worked with JSF and JPA without DAOs. Now I'd like to use DAOs. But how can I initialize the EntityManager in the DAO classes?

        public class AdresseHome {
            @PersistenceContext
            private EntityManager entityManager;

            public void persist(Adresse transientInstance) {
                log.debug("persisting Adresse instance");
                try {
                    entityManager.persist(transientInstance);
                    log.debug("persist successful");
                } catch (RuntimeException re) {
                    log.error("persist failed", re);
                    throw re;
                }
            }
        }

    Do I have to use Spring, or is there a solution that works without Spring? Thanks.
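    Without a container or Spring, @PersistenceContext is not injected, so one plain-JPA sketch is to create the EntityManager yourself from an EntityManagerFactory (the persistence unit name "myPU" is an example and must match persistence.xml):

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class AdresseHome {
            private static final EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("myPU");

            public void persist(Adresse transientInstance) {
                EntityManager em = emf.createEntityManager();
                try {
                    em.getTransaction().begin();   // RESOURCE_LOCAL transaction
                    em.persist(transientInstance);
                    em.getTransaction().commit();
                } finally {
                    em.close();
                }
            }
        }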

    Read the article

  • iPhone in-app purchasing for Ecommerce [closed]

    - by Kyle B.
    This may not be the appropriate location for this, but I would like to ask in the hope that an iOS developer familiar with the rules and regulations can comment. I would like to develop an iOS app that performs e-commerce transactions. If I roll my own payment processor and checkout process: 1) Is this allowed by Apple's rules, and 2) would I be required to remit 30% of each transaction's sale to Apple?

    Read the article

  • Resize the /var directory in Red Hat Enterprise Linux 4

    - by Sri
    I am running NDB MySQL (MySQL Cluster). The log files fill up the /var directory, so now I can't start the ndbd service. As a temporary fix, I deleted the log files and it's working again, but the log files keep filling up the /var directory. I have plenty of space in another partition, so I would like to move some of that space to /var. Here is the output from df -h:

        Filesystem                       Type    Size  Used  Avail  Use%  Mounted on
        /dev/mapper/VolGroup00-LogVol00  ext3     54G  2.9G    49G    6%  /
        /dev/cciss/c0d0p1                ext3     99M   14M    81M   14%  /boot
        none                             tmpfs  1013M     0  1013M    0%  /dev/shm
        /dev/cciss/c0d0p2                ext3    9.7G  9.7G      0  100%  /var

    There is plenty of space in /dev/mapper/VolGroup00-LogVol00, so I would like to move 10 GB from that volume to /var. Could you please help me solve this problem?
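    Since /var here is a plain disk partition rather than an LVM volume, growing it in place is not straightforward. One common workaround (a sketch; the directory names are examples and the ndbd service should be stopped first) is to move the bulky log directory onto the roomy root volume and bind-mount it back at its old path:

        mkdir -p /srv/mysql-cluster-logs
        rsync -a /var/lib/mysql-cluster/ /srv/mysql-cluster-logs/
        mount --bind /srv/mysql-cluster-logs /var/lib/mysql-cluster
        # make it permanent by adding to /etc/fstab:
        # /srv/mysql-cluster-logs  /var/lib/mysql-cluster  none  bind  0 0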

    Read the article

  • Is there a SaaS for logging user activity?

    - by JoshL
    In almost every app that I build, I create some kind of user log table to record the various activities that my actual USERS (not visitors, but people with an account) perform on the site. This is primarily used for customer service issues, allowing me to pull up a record of the pages and actions that a user has visited. The downside is the size of the UserLogs table: it gets immense. I'm not sure whether it is common practice for others to log INDIVIDUAL (not aggregate, like Google Analytics) user behavior to a database, but if it is, I'm wondering whether any form of SaaS exists to help offload this task. I essentially need a RESTful API that lets me store and retrieve individual user activity quickly and securely. Does anyone know of any, or am I the only one who has this issue?

    Read the article

  • Restart logging to a new file (Python)

    - by compie
    I'm using the following code to initialize logging in my application:

        logger = logging.getLogger()
        logger.setLevel(logging.DEBUG)

        # log to a file
        directory = '/reserved/DYPE/logfiles'
        now = datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = os.path.join(directory, 'dype_%s.log' % now)
        file_handler = logging.FileHandler(filename)
        file_handler.setLevel(logging.DEBUG)
        formatter = logging.Formatter("%(asctime)s %(filename)s, %(lineno)d, %(funcName)s: %(message)s")
        file_handler.setFormatter(formatter)
        logger.addHandler(file_handler)

        # log to the console
        console_handler = logging.StreamHandler()
        level = logging.INFO
        console_handler.setLevel(level)
        logger.addHandler(console_handler)

        logging.debug('logging initialized')

    How can I close the current logging file and restart logging to a new file? Note: I don't want to use RotatingFileHandler, because I want full control over all the filenames and the moment of rotation.
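    A sketch of one way to switch files (the helper name is made up): detach and close the existing FileHandler, then attach a fresh one with the next timestamped name, reusing the same formatter. The console StreamHandler is left untouched because the isinstance check only matches file handlers.

        import logging
        import os
        from datetime import datetime

        def restart_file_logging(logger, directory, formatter):
            # Close and detach any existing file handlers.
            for handler in list(logger.handlers):
                if isinstance(handler, logging.FileHandler):
                    logger.removeHandler(handler)
                    handler.close()
            # Attach a new handler pointing at a fresh timestamped file.
            now = datetime.now().strftime("%Y%m%d_%H%M%S")
            new_handler = logging.FileHandler(os.path.join(directory, 'dype_%s.log' % now))
            new_handler.setLevel(logging.DEBUG)
            new_handler.setFormatter(formatter)
            logger.addHandler(new_handler)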

    Read the article
