Search Results

Search found 19499 results on 780 pages for 'transaction log'.

Page 284/780

  • DataNucleus Enhancer flakey?

    - by KevMo
    I'm creating a GWT app in Google App Engine, and using the Google data store. Does anybody else have the problem of DataNucleus being flakey as all get out? I can save a class, and DataNucleus will do its thing just fine. If I change ANYTHING in the class (even adding whitespace) and then save, I get the following error: DataNucleus Enhancer completed with success for 0 classes. Timings : input=37 ms, enhance=0 ms, total=37 ms. Consult the log for full details DataNucleus Enhancer completed and no classes were enhanced. Consult the log for full details Once I clean my project, DataNucleus is happy again. Is this common when using Eclipse? Is there a workaround?

    Read the article

  • Bug in Mathematica's Integrate with PrincipalValue->True

    - by Janus
    It seems that Mathematica's handling of principal value integrals fails on some corner cases. Consider these two expressions (which should give the same result): Integrate[UnitBox[x]/(x0 - x), {x, -Infinity, Infinity}, PrincipalValue -> True, Assumptions -> {x0 > 0}] /. x0 -> 1 // Simplify Integrate[UnitBox[x]/(x0 - x) /. x0 -> 1, {x, -Infinity, Infinity}, PrincipalValue -> True] In Mathematica 7.0.0 I get I Pi + Log[3] and Log[3], respectively. Has this been fixed in later versions? Does anybody have an idea for a (more or less) general workaround?
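    For what it's worth, a quick check by hand (my own working, not part of the original question) supports the second result: UnitBox[x] is 1 on |x| <= 1/2 and 0 elsewhere, so with x0 = 1 the pole sits outside the support and the principal value reduces to an ordinary integral,

      \mathrm{PV}\int_{-\infty}^{\infty} \frac{\mathrm{UnitBox}(x)}{1 - x}\,dx
        = \int_{-1/2}^{1/2} \frac{dx}{1 - x}
        = \Bigl[-\ln(1 - x)\Bigr]_{-1/2}^{1/2}
        = \ln\tfrac{3}{2} + \ln 2
        = \ln 3 .

    So Log[3] is the expected value, and the extra I Pi in the first form does look like a bug in how Integrate treats the symbolic x0 with PrincipalValue -> True.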

    Read the article

  • SQL Query to delete oldest rows over a certain row count?

    - by Casey
    I have a table that contains log entries for a program I'm writing. I'm looking for ideas on an SQL query (I'm using SQL Server Express 2005) that will keep the newest X number of records, and delete the rest. I have a datetime column that is a timestamp for the log entry. I figure something like the following would work, but I'm not sure of the performance with the IN clause for larger numbers of records. Performance isn't critical, but I might as well do the best I can the first time.
    DELETE FROM MyTable WHERE PrimaryKey NOT IN (SELECT TOP 10000 PrimaryKey FROM MyTable ORDER BY TimeStamp DESC)

    Read the article

  • Instant Messenger: How does gtalk/yahoo messenger populate the contact list?

    - by Owen
    Hi All, We are currently working on a small IM project which pretty much works like gtalk and yahoo messenger. We came across a problem that puzzled us: how do gtalk/ym populate their contact lists? Given that the user has, let's say, more or less 500 contacts, both IMs seem to readily load the contacts pretty fast and already sorted. Here are my questions (referring to either): Does it cache its contacts, like saving them in a file somewhere upon exit, so that upon log-in it readily extracts the contacts and displays them in its contact list? Does it always request the VCARDS upon log in? OR do they have a VCARD push or whatever that simply updates the contacts' profiles (like that of their status [presence push - available, busy, etc...])?

    Read the article

  • Renaming Functions during runtime in PHP.

    - by The Rook
    In PHP 5.3, is there a way to rename a function or "hook" a function? There is the rename_function() within "APD" which has been broken since ~2004. If you try and build it on PHP 5.3 you'll get this error: 'struct _zend_compiler_globals' has no member named 'extended_info' This is a really easy error to fix, just change this line: GC(extended_info) = 1; to CG(compiler_options) |= ZEND_COMPILE_EXTENDED_INFO; I modified my php.ini and APD shows up in my phpinfo() as it should. However, when I call rename_function() the PHP page doesn't load and I get a segmentation fault in my /var/log/apache2/error.log. Is there any way to fix APD to work with a modern version of PHP? Or is there another method to rename functions? Why on earth is this vital feature not in PHP?! (Gotta love Python :)

    Read the article

  • Authlogic auto login fails on registration with STI User model

    - by Wei Gan
    Authlogic by default is supposed to auto login when the user's persistence token changes. It seems to fail in my Rails app. I set up the following single table inheritance user model hierarchy: class BaseUser < ActiveRecord::Base end class User < BaseUser acts_as_authentic end create_table "base_users", :force => true do |t| t.string "email" t.string "crypted_password" t.string "persistence_token" t.string "first_name" t.string "last_name" t.datetime "created_at" t.datetime "updated_at" t.string "type" end To get auto login to work, I need to explicitly log users in in my UsersController: def create @user = User.new(params[:user]) if @user.save UserSession.create(@user) # EXPLICITLY LOG USER IN BY CREATING SESSION flash[:notice] = "Welcome to Askapade!" redirect_to_target_or_default root_url else render :action => :new end end I was wondering if it has anything to do with STI, or with the table being named "base_users" and not "users". I set it up before without STI and it worked, so I'm wondering why it fails once I put this hierarchy in place. Thanks!

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, on which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee

    When I was a DBA, the first thing I did when I sat down at my desk at work was to check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no recourse but to write our own PowerShell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster...we don't need backups"

    Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup?

    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called Recovery Time Objective, or RTO. Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore

    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here.

    Back to the "How Often" question for a second. If you have the disk, and the network, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?

    This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN, and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files on the network (slow) or pull drives out of a dead server (been there, done that, it’s also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target on a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.

    Read the article

  • How do I return a bit from a stored procedure with nHibernate

    - by tigermain
    I am using nHibernate in my project, but I have a stored procedure which just returns a boolean indicating success or not. How do I code this in C#? I have tried the following, but it fails because I don't have a mapping for bool: {"No persister for: System.Boolean, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"} IQuery query = NHibernateSession.CreateSQLQuery("EXEC MyDatabase.dbo.[ContentProvider_Import] :ContentProviderImportLogId", "success", typeof(bool)) .SetInt32("ContentProviderImportLogId", log.Id); var test = query.UniqueResult<bool>(); and the same result from IQuery query = NHibernateSession.CreateSQLQuery("EXEC MyDatabase.dbo.[ContentProvider_Import] :ContentProviderImportLogId") .AddEntity(typeof(bool)) .SetInt32("ContentProviderImportLogId", log.Id); var test = query.UniqueResult<bool>();

    Read the article

  • click buttons error

    - by sara
    I will retrieve student information (id -number- name) from a database (MySQL) as a list view, each student have 2 buttons (delete - alert ) and radio buttons Every thing is ok, but how can I make an onClickListener, for example for the delete button because I try lots of examples, I heard that I can use (custom list or get view or direct onClickListener as in my code (but it is not working ) or Simple Cursor Adapter) I do not know what to use, I looked around for examples that can help me, but in my case but I did not find any so I hope this be reference for anyone have the same problem. this is my code which I use direct onClick with Simple Adapter public class ManageSection extends ListActivity { //ProgresogressDialog pDialog; private ProgressDialog pDialog; // Creating JSON Parser object // Creating JSON Parser object JSONParser jParser = new JSONParser(); //class boolean x =true; Button delete; ArrayList<HashMap<String, String>> studentList; //url to get all products list private static String url_all_student = "http://10.0.2.2/SmsPhp/view_student_info.php"; String cl; // JSON Node names private static final String TAG_SUCCESS = "success"; private static final String TAG_student = "student"; private static final String TAG_StudentID = "StudentID"; private static final String TAG_StudentNo = "StudentNo"; private static final String TAG_FullName = "FullName"; private static final String TAG_Avatar="Avatar"; HashMap<String, String> selected_student; // course JSONArray JSONArray student = null; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.manage_section); studentList = new ArrayList<HashMap<String, String>>(); ListView list1 = getListView(); list1.setAdapter(getListAdapter()); list1.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> adapterView, View view, int pos, long l) { selected_student =(HashMap<String, String>) studentList.get(pos); //member of your activity. delete =(Button)view.findViewById(R.id.DeleteStudent); cl=selected_student.get(TAG_StudentID); Toast.makeText(getBaseContext(),cl,Toast.LENGTH_LONG).show(); delete.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { Log.d("id: ",cl); Toast.makeText(getBaseContext(),cl,Toast.LENGTH_LONG).show(); } }); } }); new LoadAllstudent().execute(); } /** * Background Async Task to Load all student by making HTTP Request * */ class LoadAllstudent extends AsyncTask<String, String, String> { /** * Before starting background thread Show Progress Dialog * */ @Override protected void onPreExecute() { super.onPreExecute(); pDialog = new ProgressDialog(ManageSection.this); pDialog.setMessage("Loading student. Please wait..."); pDialog.setIndeterminate(false); } /** * getting All student from u r l * */ @Override protected String doInBackground(String... 
args) { // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); // getting JSON string from URL JSONObject json = jParser.makeHttpRequest(url_all_student, "GET", params); // Check your log cat for JSON response Log.d("All student : ", json.toString()); try { // Checking for SUCCESS TAG int success = json.getInt(TAG_SUCCESS); if (success == 1) { // student found // Getting Array of course student = json.getJSONArray(TAG_student); // looping through All courses for (int i = 0; i < student.length(); i++)//course JSONArray { JSONObject c = student.getJSONObject(i); // read first // Storing each json item in variable String StudentID = c.getString(TAG_StudentID); String StudentNo = c.getString(TAG_StudentNo); String FullName = c.getString(TAG_FullName); // String Avatar = c.getString(TAG_Avatar); // creating new HashMap HashMap<String, String> map = new HashMap<String, String>(); // adding each child node to HashMap key => value map.put(TAG_StudentID, StudentID); map.put(TAG_StudentNo, StudentNo); map.put(TAG_FullName, FullName); // adding HashList to ArrayList studentList.add(map); } } else { x=false; } } catch (JSONException e) { e.printStackTrace(); } return null; } /** * After completing background task Dismiss the progress dialog * **/ protected void onPostExecute(String file_url) { // dismiss the dialog after getting all products pDialog.dismiss(); if (x==false) Toast.makeText(getBaseContext(),"no student" ,Toast.LENGTH_LONG).show(); ListAdapter adapter = new SimpleAdapter( ManageSection.this, studentList, R.layout.list_student, new String[] { TAG_StudentID, TAG_StudentNo,TAG_FullName}, new int[] { R.id.StudentID, R.id.StudentNo,R.id.FullName}); setListAdapter(adapter); // Updating parsed JSON data into ListView } } } So what do you think, why doesn't the delete button work? There is no error in my log cat. What is the alternative way ?.. what should I do ?
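    One pattern that usually works for per-row buttons (a sketch of my own, not taken from the question; it reuses the question's studentList, TAG_* keys and R.id.DeleteStudent, everything else is assumed) is to wire the listener inside the adapter's getView instead of in onItemClick, since a clickable Button in a row normally swallows the touch before the list item ever sees it. A drop-in replacement for the adapter setup in onPostExecute could look like this:

      // Minimal sketch: attach the delete handler when each row view is built.
      ListAdapter adapter = new SimpleAdapter(
              ManageSection.this, studentList, R.layout.list_student,
              new String[] { TAG_StudentID, TAG_StudentNo, TAG_FullName },
              new int[] { R.id.StudentID, R.id.StudentNo, R.id.FullName }) {
          @Override
          public View getView(int position, View convertView, ViewGroup parent) {
              View row = super.getView(position, convertView, parent);
              final HashMap<String, String> student = studentList.get(position);
              Button delete = (Button) row.findViewById(R.id.DeleteStudent);
              delete.setOnClickListener(new View.OnClickListener() {
                  public void onClick(View v) {
                      // The button now knows exactly which student's row it sits in.
                      Toast.makeText(ManageSection.this,
                              student.get(TAG_StudentID), Toast.LENGTH_LONG).show();
                  }
              });
              return row;
          }
      };
      setListAdapter(adapter);

    From there the onClick body can call whatever delete logic is needed (for example another AsyncTask that posts to a delete endpoint; that endpoint is an assumption on my part, not something in the question).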

    Read the article

  • What CPAN module can summarize arbitrary error logs?

    - by mithaldu
    I'm maintaining some website code that will soon dump all its errors and warnings into a log file. In order to make this a bit more proactive, I plan to parse this log file daily, summarize the warnings and errors (i.e. count the occurrences of each specific one and group by either warning/error) and then email this to the devs on the project. While this would admittedly be rather trivial with a hash and some further fiddling, I wondered if there is a suitable module on CPAN that I could use for this task. It would either be one that summarizes specifically Perl error/warning logs or one that summarizes arbitrary text files. Any suggestions?

    Read the article

  • What are the common issues that can cause slow boot times of Windows CE6 Images?

    - by Psychic
    I am relatively new to Platform Builder, and whilst I am able to produce nk.bin files, they boot very slowly, 80-100 seconds, so I think there may be some checkbox somewhere that I need to set (or clear)! I've already removed kitl, profiling, etc in the project settings, and set the project to 'release build' & 'ship'. When I looked at the startup event log (in debug), there doesn't appear to be any specific point where it is slow. The log pretty much scrolls all the way through with no major pauses. One thing I found strange was that although the nk.bin file was a lot smaller in release build (just under 12Mb), the boot time didn't noticeably change from the debug build... The board is a Vortex86DX_60A and I'm building CE6. Are there any 'common builder mistakes' that I may be missing here, or is this going to be something a little deeper?

    Read the article

  • Using the standard Java logging, is it possible to restart logs after a certain period?

    - by Fry
    I have some Java code that will be running as an importer for data for a much larger project. The initial logging code was done with the java.util.logging classes, so I'd like to keep it if possible, but it seems to be a little inadequate now given the amount of data passing through the importer. Often, the importer will get data that the main system doesn't have information for, or that doesn't match the system's data, so it is ignored, but a message is written to the log about what information was dropped and why it wasn't imported. The problem is that this tends to grow in size very quickly, so we'd like to be able to start a fresh log daily or weekly. Does anybody have an idea if this can be done in the logging classes or would I have to switch to log4j or custom? Thanks for any help!
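    For what it's worth, java.util.logging's FileHandler only rolls files by size, not by time, so one common workaround (a minimal sketch of my own, not from the question; the file-name pattern and the daily schedule are assumptions) is to keep the existing logging calls and simply swap in a fresh FileHandler on a timer:

      import java.util.Timer;
      import java.util.TimerTask;
      import java.util.concurrent.TimeUnit;
      import java.util.logging.FileHandler;
      import java.util.logging.Logger;
      import java.util.logging.SimpleFormatter;

      // Minimal sketch: roll the importer's log file once a day by replacing the handler.
      public class DailyLogRoller {
          private static final Logger LOG = Logger.getLogger("importer");
          private final Timer timer = new Timer(true);
          private FileHandler current;

          public synchronized void roll() throws Exception {
              FileHandler fresh = new FileHandler(
                      "importer-" + System.currentTimeMillis() + ".log", true);
              fresh.setFormatter(new SimpleFormatter());
              LOG.addHandler(fresh);
              if (current != null) {            // detach and close yesterday's file
                  LOG.removeHandler(current);
                  current.close();
              }
              current = fresh;
          }

          public void start() throws Exception {
              roll();                           // open the first file immediately
              timer.scheduleAtFixedRate(new TimerTask() {
                  public void run() {
                      try { roll(); } catch (Exception e) { e.printStackTrace(); }
                  }
              }, TimeUnit.DAYS.toMillis(1), TimeUnit.DAYS.toMillis(1));
          }
      }

    If switching libraries is an option, log4j's DailyRollingFileAppender gives the same behaviour declaratively, but the sketch above avoids touching the existing java.util.logging calls.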

    Read the article

  • Make: how to force make?

    - by HH
    The command $ make all gives errors such as "rm: cannot remove '.lambda': No such file or directory", so it stops. How can I force make to keep going? Makefile:
    all:
        make clean
        make .lambda
        make .lambda_t
        make .activity
        make .activity_t_lambda
    clean:
        rm .lambda .lambda_t .activity .activity_t_lambda
    .lambda:
        awk '{printf "%.4f \n", log(2)/log(2.71828183)/$$1}' t_year > .lambda
    .lambda_t:
        paste .lambda t_year > .lambda_t
    .activity:
        awk '{printf "%.4f \n", $$1*2.71828183^(-$$1*$$2)}' .lambda_t > .activity
    .activity_t_lambda:
        paste .activity t_year .lambda | sed -e 's@\t@\t\&\t@g' -e 's@$$@\t\\\\@g' | tee > .activity_t_lambda > ../RESULTS/currentActivity.tex

    Read the article

  • Need help getting Suspend to work in Ubuntu on laptop

    - by Aerik
    I've been doing a lot of research, but I've got to admit right out front that I'm not even sure exactly what is the right question. I've installed Kubuntu 10.4 on a Panasonic Toughbook CF-29. When I try to use "suspend", the screen flickers, and then the hard drive light goes off but the power light stays on. I looked at /var/log/pm-suspend.log, but I don't see any errors... though I'm not sure what success should look like either. So I guess my real question is a bit more accurately stated as "How do I troubleshoot suspend not working in Kubuntu on a laptop?" Thanks, Aerik

    Read the article

  • Let your Signature Experience drive IT-decision making

    - by Tania Le Voi
    Today’s CIO job description: “Align IT infrastructure and solutions with business goals and objectives; AND while doing so reduce costs; BUT ALSO, be innovative, ensure the architectures are adaptable and agile as we need to act today on the changes that we may request tomorrow.” Sound like an unachievable request? The fact is, reality dictates that CIOs are put under this type of pressure to deliver more with less.

    In a past career phase I spent a few years as an IT Relationship Manager for a large Insurance company. This is a role that we see all too infrequently in many of our customers, and it’s a shame. The purpose of this role was to build a bridge, a relationship between IT and the business. Key to achieving that goal was to ensure the same language was being spoken and, more importantly, that objectives were commonly understood - hence services and projects were delivered on time, on budget and actually solved the business problems. In reality IT and the business are already married, but the relationship is most often defined as ‘supplier’ of IT rather than a ‘trusted partner’. To deliver business value they need to understand how to work together effectively to attain this next level of partnership.

    The Business cannot compete if they do not get a new product to market ahead of the competition, or, for example, act in a timely manner to address a new industry problem such as a legislative change. An even better example is when the Application or Service fails and the Business takes a hit from bad publicity, becoming a trending topic on social media and losing direct revenue from online channels. For this reason alone Business and IT need the alignment of their priorities and deliverables now more than ever! Take a look at Forrester’s recent study that found ‘many IT respondents considering themselves to be trusted partners of the business but their efforts are impaired by the inadequacy of tools and organizations’.

    IT Meet the Business; Business Meet IT

    So what is going on? We talk about aligning the business with IT but the reality is it’s difficult to do. Like any relationship, each side has different goals and needs, and language can be a barrier; business vs. technology jargon! What if we could translate the needs of both sides into actionable information, backed by data both sides understand, presented in a meaningful way? Well now we can, with the Business-Driven Application Management capabilities in Oracle Enterprise Manager 12cR2!

    Enterprise Manager’s Business-Driven Application Management capabilities provide the information that IT needs to understand the impact of its decisions on business criteria. No longer does IT need to be focused solely on speeds and feeds, performance and throughput – now IT can understand IT’s impact on business KPIs like inventory turns, order-to-cash cycle, pipeline-to-forecast, and similar. Similarly, now the line of business can understand which IT services are most critical for the KPIs they care about.

    There are a good number of resources on Oracle Technology Network that describe the functionality of these products, so I won’t rehash them here. What I want to talk about is what you do with these products. What’s next after we meet? Where do you start?

    Step 1: Identify the Signature Experience. This is THE business process (or set of processes) that is core to the business, the one that drives the economic engine, the process that a customer recognises the company brand for, reputation, the customer experience, the process that a CEO would state as his number one priority. The crème de la crème of your business! Once you have nailed this, the rest gets easier, because Enterprise Manager 12c makes it easy.

    Step 2: Map the Signature Experience to underlying IT. Taking the signature experience, map out the touch points of the components that play a part in ensuring this business transaction is successful end to end; think of it like mapping out a critical path: the applications, middleware, databases and hardware. Use the wealth of Enterprise Manager features such as Systems, Services, Business Application Targets and Business Transaction Management (BTM) to assist you. Adding Real User Experience Insight (RUEI) into the mix will make the end-to-end customer satisfaction story transparent. Work with the business and define meaningful key performance indicators (KPIs) and thresholds that you can report and act upon.

    Step 3: Observe the data over time. You now have meaningful insight into every step enabling your signature experience and you understand the implication of that experience on your underlying IT. Watch it for a few months, see what happens, and reconvene with your business stakeholders to set clear and measurable targets which can re-define service levels.

    Step 4: Change the information about which you and the business communicate. It’s amazing what happens when you and the business speak the same language. You’ll be able to make more informed business and IT decisions. From here IT can identify where and how budget is spent, whether on the level of support, performance, capacity, HA, DR, certification, etc. IT SLAs no longer need to be focused on metrics such as % availability but can be structured around business process requirements.

    The power of this way of thinking doesn’t end here. IT staff get to see and understand how their own role contributes to the business, making them accountable for the business service. Take a step further and appraise your staff on the business competencies that are linked to service availability. For the business, the language barrier is removed by producing targeted reports on the signature experience core to the business and therefore key to the CEO. Chargeback or showback becomes easier to justify as the ‘cost per day of outage’ can be more easily calculated; the business will be able to translate the cost to the business into the cost/value of the underlying IT that supports it.

    Used this way, Oracle Enterprise Manager 12c is a key enabler of a harmonious relationship between the end customer, the business and IT, delivering ultimate service and satisfaction. Just engage with the business upfront, make the signature experience visible and let Enterprise Manager 12c do the rest. In the next blog entry we will cover some of the Enterprise Manager features mentioned, to enable you to implement this new way of working.

    Read the article

  • Problem building PyGTK on CentOS

    - by Marcelo Cantos
    I am trying to build PyGTK on CentOS for a non-standard Python (2.6, vs the out-of-the-box 2.4). It requires that I first build pygobject. pygobject-2.18.0 fails at the configure step. The error message is as follows: checking for GLIB - version >= 2.14.0... no *** Could not run GLIB test program, checking why... *** The test program failed to compile or link. See the file config.log for the *** exact error that occured. This usually means GLIB is incorrectly installed. configure: error: maybe you want the pygobject-2-4 branch? I have downloaded, built and successfully installed GLib. The config.log file contains the following output: configure:6893: gcc -E conftest.c conftest.c:13:28: error: ac_nonexistent.h: No such file or directory What am I doing wrong?

    Read the article

  • Executing python subprocess via git hook

    - by aljesco
    I'm running Gitolite over a Git repository and I have a post-receive hook there written in Python. I need to execute the "git" command in the git repository directory. Here are a few lines of code: proc = subprocess.Popen(['git', 'log', '-n1'], cwd='/home/git/repos/testing.git', stdout=subprocess.PIPE, stderr=subprocess.PIPE) proc.communicate() After I make a new commit and push to the repository, the script executes and says fatal: Not a git repository: '.' If I run proc = subprocess.Popen(['pwd'], cwd='/home/git/repos/testing.git', stdout=subprocess.PIPE, stderr=subprocess.PIPE) it prints, as expected, the correct path to the git repository (/home/git/repos/testing.git). If I run this script manually from bash, it works correctly and shows the correct output of "git log". What am I doing wrong?

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage etc. It's also good when linking it to error log databases and performance counter databases to compare usage with errors, etc. Having implemented this for just one system, and for only the last 2-3 weeks, we already have a 5GB database with around 10 million records. This is making any queries to this database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

    Read the article

  • Writing to a comet stream using tomcat 6.0

    - by user301247
    Hey I'm new to java servlets and I am trying to write one that uses comet so that I can create a long polling Ajax request. I can successfully start the stream and perform operations but I can't write anything out. Here is my code: public class CometTestServlet extends HttpServlet implements CometProcessor { /** * */ private static final long serialVersionUID = 1070949541963627977L; private MessageSender messageSender = null; protected ArrayList<HttpServletResponse> connections = new ArrayList<HttpServletResponse>(); public void event(CometEvent cometEvent) throws IOException, ServletException { HttpServletRequest request = cometEvent.getHttpServletRequest(); HttpServletResponse response = cometEvent.getHttpServletResponse(); //final PrintWriter out = response.getWriter(); if (cometEvent.getEventType() == CometEvent.EventType.BEGIN) { PrintWriter writer = response.getWriter(); writer.println("<!doctype html public \"-//w3c//dtd html 4.0 transitional//en\">"); writer.println("<head><title>JSP Chat</title></head><body bgcolor=\"#FFFFFF\">"); writer.println("</body></html>"); writer.flush(); cometEvent.setTimeout(10 * 1000); //cometEvent.close(); } else if (cometEvent.getEventType() == CometEvent.EventType.ERROR) { log("Error for session: " + request.getSession(true).getId()); synchronized(connections) { connections.remove(response); } cometEvent.close(); } else if (cometEvent.getEventType() == CometEvent.EventType.END) { log("End for session: " + request.getSession(true).getId()); synchronized(connections) { connections.remove(response); } PrintWriter writer = response.getWriter(); writer.println("</body></html>"); cometEvent.close(); } else if (cometEvent.getEventType() == CometEvent.EventType.READ) { //handleReadEvent(cometEvent); InputStream is = request.getInputStream(); byte[] buf = new byte[512]; do { int n = is.read(buf); //can throw an IOException if (n > 0) { log("Read " + n + " bytes: " + new String(buf, 0, n) + " for session: " + request.getSession(true).getId()); } else if (n < 0) { //error(cometEvent, request, response); return; } } while (is.available() > 0); } } Any help would be appreciated.
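    Two things worth checking, offered as a sketch rather than a confirmed fix: Comet support in Tomcat 6 requires the NIO or APR connector, so the default blocking HTTP connector in server.xml is worth ruling out first; and for the chat-style pattern that the connections list and MessageSender field suggest, the response has to be registered during BEGIN and then written to and flushed later, for example:

      // Sketch of the broadcast half of the pattern. In the BEGIN branch, after
      // writing the initial markup, keep the response instead of closing the event:
      //     response.setContentType("text/html; charset=UTF-8");
      //     synchronized (connections) { connections.add(response); }
      protected void broadcast(String message) {
          synchronized (connections) {
              for (HttpServletResponse connection : connections) {
                  try {
                      PrintWriter writer = connection.getWriter();
                      writer.println(message);
                      writer.flush();   // push the bytes out while the event stays open
                  } catch (IOException e) {
                      log("Could not write to a held comet connection", e);
                  }
              }
          }
      }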

    Read the article

  • Compressing large text data before storing into db?

    - by Steel Plume
    Hello, I have an application which retrieves many large log files from a system LAN. Currently I put all the log files into PostgreSQL; the table has a column of type TEXT and I don't plan any search on this text column, because I use another external process which nightly retrieves all files and scans for sensitive patterns. So the column value could also be a BLOB or a CLOB, but now my question is the following: the database already has its own compression system, but could I improve on that compression manually, as with common compressor utilities? And above all, what if I manually pre-compress the large file and then put it as binary into the data table: is that pointless, since the database system provides its internal compression?
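    For a rough idea of what manual pre-compression would look like (a sketch in Java under my own assumptions; the poster's language isn't stated and the log_files table with a bytea body column is made up), gzip the text on the application side and store the bytes, keeping in mind that PostgreSQL's built-in TOAST compression will gain little extra on data that is already gzipped:

      import java.io.ByteArrayOutputStream;
      import java.io.IOException;
      import java.nio.charset.StandardCharsets;
      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.util.zip.GZIPOutputStream;

      // Minimal sketch: gzip a log file's text in the application and store the bytes.
      public class LogCompressor {

          static byte[] gzip(String text) throws IOException {
              ByteArrayOutputStream buffer = new ByteArrayOutputStream();
              try (GZIPOutputStream gz = new GZIPOutputStream(buffer)) {
                  gz.write(text.getBytes(StandardCharsets.UTF_8));
              }
              return buffer.toByteArray();
          }

          // Hypothetical table "log_files(name text, body bytea)".
          static void store(Connection db, String name, String text) throws Exception {
              try (PreparedStatement insert =
                       db.prepareStatement("INSERT INTO log_files (name, body) VALUES (?, ?)")) {
                  insert.setString(1, name);
                  insert.setBytes(2, gzip(text));
                  insert.executeUpdate();
              }
          }
      }

    The trade-off mentioned in the question still applies: the nightly pattern scan would then have to gunzip each row before searching it.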

    Read the article

  • Sending the files (At least 11 files) from folder through web service to android app.

    - by Shashank_Itmaster
    Hello All, I stuck in middle of this situation,Please help me out. My question is that I want to send files (Total 11 PDF Files) to android app using web service. I tried it with below code.Main Class from which web service is created public class MultipleFilesImpl implements MultipleFiles { public FileData[] sendPDFs() { FileData fileData = null; // List<FileData> filesDetails = new ArrayList<FileData>(); File fileFolder = new File( "C:/eclipse/workspace/AIPWebService/src/pdfs/"); // File fileTwo = new File( // "C:/eclipse/workspace/AIPWebService/src/simple.pdf"); File sendFiles[] = fileFolder.listFiles(); // sendFiles[0] = fileOne; // sendFiles[1] = fileTwo; DataHandler handler = null; char[] readLine = null; byte[] data = null; int offset = 0; int numRead = 0; InputStream stream = null; FileOutputStream outputStream = null; FileData[] filesData = null; try { System.out.println("Web Service Called Successfully"); for (int i = 0; i < sendFiles.length; i++) { handler = new DataHandler(new FileDataSource(sendFiles[i])); fileData = new FileData(); data = new byte[(int) sendFiles[i].length()]; stream = handler.getInputStream(); while (offset < data.length && (numRead = stream.read(data, offset, data.length - offset)) >= 0) { offset += numRead; } readLine = Base64Coder.encode(data); offset = 0; numRead = 0; System.out.println("'Reading File............................"); System.out.println("\n"); System.out.println(readLine); System.out.println("Data Reading Successful"); fileData.setFileName(sendFiles[i].getName()); fileData.setFileData(String.valueOf(readLine)); readLine = null; System.out.println("Data from bean " + fileData.getFileData()); outputStream = new FileOutputStream("D:/" + sendFiles[i].getName()); outputStream.write(Base64Coder.decode(fileData.getFileData())); outputStream.flush(); outputStream.close(); stream.close(); // FileData fileDetails = new FileData(); // fileDetails = fileData; // filesDetails.add(fileData); filesData = new FileData[(int) sendFiles[i].length()]; } // return fileData; } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (Exception e) { e.printStackTrace(); } return filesData; } } Also The Interface MultipleFiles:- public interface MultipleFiles extends Remote { public FileData[] sendPDFs() throws FileNotFoundException, IOException, Exception; } Here I am sending an array of bean "File Data",having properties viz. FileData & FileName. FileData- contains file data in encoded. FileName- encoded file name. The Bean:- (FileData) public class FileData { private String fileName; private String fileData; public String getFileName() { return fileName; } public void setFileName(String fileName) { this.fileName = fileName; } public String getFileData() { return fileData; } public void setFileData(String string) { this.fileData = string; } } The android DDMS gives out of memory exception when tried below code & when i tried to send two files then only first file is created. public class PDFActivity extends Activity { private final String METHOD_NAME = "sendPDFs"; private final String NAMESPACE = "http://webservice.uks.com/"; private final String SOAP_ACTION = NAMESPACE + METHOD_NAME; private final String URL = "http://192.168.1.123:8080/AIPWebService/services/MultipleFilesImpl"; /** Called when the activity is first created. 
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); TextView textViewOne = (TextView) findViewById(R.id.textViewOne); try { SoapObject soapObject = new SoapObject(NAMESPACE, METHOD_NAME); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope( SoapEnvelope.VER11); envelope.setOutputSoapObject(soapObject); textViewOne.setText("Web Service Started"); AndroidHttpTransport httpTransport = new AndroidHttpTransport(URL); httpTransport.call(SOAP_ACTION, envelope); // SoapObject result = (SoapObject) envelope.getResponse(); Object result = envelope.getResponse(); Log.i("Result", result.toString()); // String fileName = result.getProperty("fileName").toString(); // String fileData = result.getProperty("fileData").toString(); // Log.i("File Name", fileName); // Log.i("File Data", fileData); // File pdfFile = new File(fileName); // FileOutputStream outputStream = // openFileOutput(pdfFile.toString(), // MODE_PRIVATE); // outputStream.write(Base64Coder.decode(fileData)); Log.i("File", "File Created"); // textViewTwo.setText(result); // Object result = envelope.getResponse(); // FileOutputStream outputStream = openFileOutput(name, mode) } catch (Exception e) { e.printStackTrace(); } } } Please help with some explanation or changes in my code. Thanks in Advance.
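    One detail that stands out in the server code above (my reading of it, so treat this as a suggestion rather than a confirmed fix): filesData is re-created inside the loop as new FileData[(int) sendFiles[i].length()], i.e. an array sized by the file's byte length, and the populated fileData bean is never stored in it, so the method returns a large array of nulls. A minimal sketch of the intended shape, reusing the question's FileData and Base64Coder:

      // Sketch: one array slot per file, and keep each bean as soon as it is filled.
      FileData[] filesData = new FileData[sendFiles.length];
      for (int i = 0; i < sendFiles.length; i++) {
          byte[] data = new byte[(int) sendFiles[i].length()];
          DataInputStream in = new DataInputStream(new FileInputStream(sendFiles[i]));
          in.readFully(data);                      // read the whole PDF in one go
          in.close();
          FileData fileData = new FileData();
          fileData.setFileName(sendFiles[i].getName());
          fileData.setFileData(String.valueOf(Base64Coder.encode(data)));
          filesData[i] = fileData;                 // store the populated bean
      }
      return filesData;

    Even with that fixed, eleven base64-encoded PDFs in a single SOAP envelope is a lot for the phone to buffer, so fetching one file per call (or paging them) may be the simpler way around the out-of-memory error seen in DDMS.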

    Read the article

  • Is there a way to TouchEndInside in Mobile Safari?

    - by Ken Sykora
    I'm trying to determine whether a user does a touchupinside in Mobile Safari for an iPhone web app. So far I've been unsuccessful. The touchend event fires regardless of where the touchup happens on the screen, and I can't seem to discern from anything in the event argument that the target has changed. Can anyone point me in the right direction on how to capture a touchendinside (vs. touchendoutside) event using javascript? $('a.arrow').bind('touchend',function(e) { console.log($(e.srcElement)); //both of these always return the same element console.log($(e.toElement)); //both of these always return the same element });

    Read the article

  • Node.js Creating and Deleting a File Recursively

    - by Matt
    I thought it would be a cool experiment to have a for loop and create a file hello.txt and then delete it with unlink. I figured that if fs.unlink is the delete-file procedure in Node, then fs.link must be the create-file one. However, my code will only delete, and it will not create, not even once. Even if I separate the fs.link code into a separate file, it still will not create my file hello.txt. Below is my code: var fs = require('fs'); for(var i=1;i<=10;i++){ fs.unlink('./hello.txt', function (err) { if (err){ throw err; } else { console.log('successfully deleted file'); } fs.link('./hello.txt', function (err) { if (err){ throw err; } else { console.log('successfully created file'); } }); }); } http://nodejs.org/api/fs.html#fs_fs_link_srcpath_dstpath_callback Thank you!

    Read the article

  • Am I reindexing this Sphinx index correctly?

    - by Ethan
    According to the Thinking Sphinx docs... Turning on delta indexing does not remove the need for regularly running a full re-index ... So I set up this cron job... 50 10 * * * cd /var/www/my_app/current && /opt/ruby/bin/rake thinking_sphinx:index RAILS_ENV=production >> /var/www/my_app/current/log/reindexing.log 2>&1 Is that a reasonable way to do it? Should I be doing something different?

    Read the article
