Search Results

Search found 21131 results on 846 pages for 'binary log'.


  • How to pull one commit at a time from a remote git repository?

    - by Norman Ramsey
    I'm trying to set up a darcs mirror of a git repository. I have something that works OK, but there's a significant problem: if I push a whole bunch of commits to the git repo, those commits get merged into a single darcs patchset. I really want to make sure each git commit gets set up as a single darcs patchset. I bet this is possible by doing some kind of git fetch followed by interrogation of the local copy of the remote branch, but my git fu is not up to the job. Here's the (ksh) code I'm using now, more or less:

        git pull -v  # pulls all the commits from remote --- bad!

        # gets information about only the last commit pulled -- bad!
        author="$(git log HEAD^..HEAD --pretty=format:"%an <%ae>")"
        logfile=$(mktemp)
        git log HEAD^..HEAD --pretty=format:"%s%n%b%n" > $logfile

        # add all new files to darcs and record a patchset. this part is OK
        darcs add -q --umask=0002 -r .
        darcs record -a -A "$author" --logfile="$logfile"
        darcs push -a
        rm -f $logfile

    My idea is:

    1. Try git fetch to get a local copy of the remote branch (not sure exactly what arguments are needed).
    2. Somehow interrogate the local copy to get a hash for every commit since the last mirroring operation (I have no idea how to do this).
    3. Loop through all the hashes, pulling just that commit and recording the associated patchset (I'm pretty sure I know how to do this if I get my hands on the hash).

    I'd welcome either help fleshing out the scenario above or suggestions about something else I should try. Ideas?
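    A minimal sketch of steps 1-3 (the remote and branch names are assumptions): after a git fetch, git rev-list can enumerate exactly the commits the local branch is missing, oldest first, and a fast-forward merge can then replay them one at a time, recording one darcs patchset per commit:

        # Hedged sketch: replay fetched commits one darcs patchset at a time.
        # Assumes the remote is "origin", the branch is "master", and history
        # is linear (each new commit fast-forwards from the previous one).
        git fetch origin

        # Every commit in origin/master that HEAD lacks, oldest first.
        for hash in $(git rev-list --reverse HEAD..origin/master); do
            git merge --ff-only "$hash"     # advance to exactly this commit
            author="$(git log -1 --pretty=format:'%an <%ae>' "$hash")"
            logfile=$(mktemp)
            git log -1 --pretty=format:'%s%n%b%n' "$hash" > "$logfile"
            darcs add -q --umask=0002 -r .
            darcs record -a -A "$author" --logfile="$logfile"
            rm -f "$logfile"
        done
        darcs push -a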


  • Function inside jquery returns undefined

    - by steamboy
    The function I call inside jQuery returns undefined, even though Firebug shows it computing the correct data:

        function addToPlaylist(component_type, add_to_pl_value, pl_list_no) {
            add_to_pl_value_split = add_to_pl_value.split(":");
            $.ajax({
                type: "POST",
                url: "ds/index.php/playlist/check_folder",
                data: "component_type=" + component_type + "&value=" + add_to_pl_value_split[1],
                success: function(msg) {
                    if (msg == 'not_folder') {
                        if (component_type == 'video') {
                            rendered_item = render_list_item_video(add_to_pl_value_split[0], add_to_pl_value_split[1], pl_list_no);
                        } else if (component_type == 'image') {
                            rendered_item = render_list_item_image(add_to_pl_value_split[0], add_to_pl_value_split[1], pl_list_no);
                        }
                    } else {
                        // List files from folder
                        folder_name = add_to_pl_value_split[1].replace(' ', '-');
                        var x = msg; // json
                        eval('var file=' + x);
                        var rendered_item;
                        for (var i in file) {
                            //console.log(file[i]);
                            if (component_type == 'video') {
                                rendered_item = render_list_item_video(folder_name + '-' + i, file[i], pl_list_no) + rendered_item;
                            }
                            if (component_type == 'image') {
                                rendered_item = render_list_item_image(folder_name + '-' + i, file[i], pl_list_no) + rendered_item;
                            }
                        }
                    }
                    $("#files").html(filebrowser_list); // Reload playlist
                    console.log(rendered_item);
                    return rendered_item;
                },
                error: function() {
                    alert("An error occurred while updating. Try again in a while");
                }
            });
        }

        $('document').ready(function() {
            addToPlaylist($('#component_type').val(), ui_item, 0); // This one returns undefined
        });


  • How to trace the connection pool in a Java Web application - DBMS_APPLICATION_INFO

    - by Cleiton Garcia
    Hello, I need to improve traceability in a web application that usually runs under a fixed DB user. The DBA should have fast access to information about the heavy users that are degrading the database. Five years ago, I implemented a .NET ORM engine that logs the user and the server using the DBMS_APPLICATION_INFO package, via a wrapper above the connection manager with the following code:

        DBMS_APPLICATION_INFO.SET_MODULE('" + User + " - " + appServerMachine + "','');

    Each time a connection is taken from the pool, the package is executed to log the information in V$SESSION. Has anyone discovered or implemented a solution for this problem using TopLink or Hibernate? Is there a default implementation for this problem? I found here a solution like the one I implemented 5 years ago, but I'd like to know whether anyone has a better solution, integrated with the ORM: http://stackoverflow.com/questions/53379/using-dbmsapplicationinfo-with-jboss My application is built on Spring, the DAOs are implemented with JPA (using Hibernate), and it currently runs directly in Tomcat, with plans to migrate next year to SAP NetWeaver Application Server. Thanks.


  • chroot'ing SSH home directories, shell problem.

    - by Hamza
    Hi folks, I am trying to chroot my SSH users to their home directories and it seems to work... in a strange way. Here is what I have in my sshd_config:

        Match group restricthome
            ChrootDirectory %h

    The permissions on the user directories look like this:

        drwxr-xr-x 2 root root 1024 May 11 13:45 [user]/

    And I can see that the user logs in successfully (with no error messages after this):

        May 11 13:49:23 box sshd[5695]: Accepted password for [user] from x.x.x.x port 2358 ssh2

    But after entering the password the PuTTY window closes. This is a wild guess, but could it be because the user's shell is set to /bin/bash and it can't execute because of the chroot? If so, could you give me pointers on how to fix it? Would simply copying the bash binary into the user's home directory and modifying the shell work? How would I deal with the dependencies? ldd shows quite a few of those :) Comments/suggestions will be appreciated. Thanks.
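    A hedged sketch of the binary-copying approach the question guesses at (paths are illustrative, run as root): copy the shell into the chroot, then walk ldd's output to copy each shared library it needs into the same relative location:

        # Sketch: populate a chroot with bash and its shared-library deps.
        CHROOT=/home/user          # the user's ChrootDirectory
        mkdir -p "$CHROOT/bin"
        cp /bin/bash "$CHROOT/bin/"

        # ldd prints lines like "libc.so.6 => /lib/libc.so.6 (0x...)";
        # pull out each resolved path and mirror it under the chroot.
        for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do
            mkdir -p "$CHROOT$(dirname "$lib")"
            cp "$lib" "$CHROOT$lib"
        done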


  • gdb reverse debugging error

    - by Werner
    Hi, I started to try reverse debugging with gdb 7, following the tutorial at http://www.sourceware.org/gdb/wiki/ProcessRecord/Tutorial, and I thought: great! Then I started to debug a real program which gives an error at the end. So I run it with gdb, and I put a breakpoint just before the place I think the error appears. Then I type "record" in order to start recording actions for future reverse debugging. But after some steps I get:

        Process record doesn't support instruction 0xf0d at address 0x2aaaab4c4b4e.
        Process record: failed to record execution log.

        Program received signal SIGTRAP, Trace/breakpoint trap.
        0x00002aaaab4c4b4e in memcpy () from /lib64/libc.so.6
        (gdb) n
        Single stepping until exit from function memcpy,
        which has no line number information.
        Process record doesn't support instruction 0xf0d at address 0x2aaaab4c4b4e.
        Process record: failed to record execution log.

        Program received signal SIGABRT, Aborted.
        0x00002aaaab4c4b4e in memcpy () from /lib64/libc.so.6

    Before I look at it in detail, I wonder if this feature is still buggy, or if I should start to record from the beginning. Where this "record" error happens, just an object is created as a copy of another. Thanks
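    One hedged way to narrow this down (the binary name is hypothetical; the address is taken from the log above): ask gdb which instruction sits at the failing address. If it is an SSE-style opcode inside the optimized libc memcpy, the failure would be a known limitation of early process-record support rather than something that recording from the beginning avoids:

        # Sketch: show the exact instruction process record rejected.
        gdb -ex 'break main' -ex 'run' -ex 'x/i 0x2aaaab4c4b4e' ./myprog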


  • javascript table sorting/paging (client-side). How big is too big?

    - by Aheho
    I'm using a jQuery plugin called Tablesorter to do client-side sorting of a log table in one of my applications. I am also making use of the tablepager add-in. I really like the responsiveness that client-side sorting and paging brings to the party. I also like how you don't have to hit the web server or database repeatedly. However, I can see that, in time, the log I'm displaying could grow quite large. I'm sure there comes a point where client-side paging and sorting become impractical. At what point will this technique begin to collapse under its own weight? 500 records? 2,000 records? 10,000 records? EDIT: In a nutshell, what criteria would you use to decide between client-side and server-side sorting/paging? Does the size of the expected result set factor into your decision? Where is the tipping point?


  • Ruby on Rails delayed_job failing with RVM

    - by mottalrd
    I have delayed_job installed, and I start the daemon that runs the jobs with this Ruby script:

        require 'rubygems'
        require 'daemon_spawn'
        $: << '.'

        RAILS_ROOT = File.expand_path(File.join(File.dirname(__FILE__), '..'))

        class DelayedJobWorker < DaemonSpawn::Base
          def start(args)
            ENV['RAILS_ENV'] ||= args.first || 'development'
            Dir.chdir RAILS_ROOT
            require File.join('config', 'environment')
            Delayed::Worker.new.start
          end

          def stop
            system("kill `cat #{RAILS_ROOT}/tmp/pids/delayed_job.pid`")
          end
        end

        DelayedJobWorker.spawn!(:log_file => File.join(RAILS_ROOT, "log", "delayed_job.log"),
                                :pid_file => File.join(RAILS_ROOT, 'tmp', 'pids', 'delayed_job.pid'),
                                :sync_log => true,
                                :working_dir => RAILS_ROOT)

    If I run the command with rvmsudo it works perfectly. If I simply use the ruby command without RVM, it fails with the output below, and I have no idea why. Could you give me some clue? Thank you.

        user@mysystem:~/redeal.it/application$ ruby script/delayed_job start production
        /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:16:in `kill': Operation not permitted (Errno::EPERM)
            from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:16:in `alive?'
            from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:125:in `alive?'
            from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:176:in `block in start'
            from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:176:in `select'
            from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:176:in `start'
            from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:165:in `spawn!'
            from script/delayed_job:37:in `<main>'
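    A hedged guess at the cause, based on the Errno::EPERM raised by kill inside alive?: an earlier rvmsudo run may have left a pid file pointing at a root-owned daemon, which an unprivileged ruby cannot signal. A quick check from the shell (paths relative to the app root, from the trace above):

        # Does the pid file point at a root-owned daemon?
        cat tmp/pids/delayed_job.pid
        ps -o user,pid,cmd -p "$(cat tmp/pids/delayed_job.pid)"

        # If the old daemon belongs to root, stop it (or remove a stale
        # pid file) before starting the worker as the unprivileged user.
        sudo kill "$(cat tmp/pids/delayed_job.pid)"   # or: sudo rm tmp/pids/delayed_job.pid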


  • Basic File upload in GWT

    - by Maksim
    I'm trying to figure out how to upload one file using GWT's FileUpload widget. I'm using GWT and Google App Engine with Java, but I would like to upload the file to my own Linux server. I have the following code already, but now I can't figure out how to submit my file to the server and save it:

        public class FileUploader {
            private ControlPanel cp;
            private FormPanel form = new FormPanel();
            private FileUpload fu = new FileUpload();

            public FileUploader(ControlPanel cp) {
                this.cp = cp;
                this.cp.setPrimaryArea(getFileUploaderWidget());
            }

            @SuppressWarnings("deprecation")
            public Widget getFileUploaderWidget() {
                form.setEncoding(FormPanel.ENCODING_MULTIPART);
                form.setMethod(FormPanel.METHOD_POST);
                // form.setAction(/* WHAT SHOULD I PUT HERE */);

                VerticalPanel holder = new VerticalPanel();
                fu.setName("upload");
                holder.add(fu);
                holder.add(new Button("Submit", new ClickHandler() {
                    public void onClick(ClickEvent event) {
                        GWT.log("You selected: " + fu.getFilename(), null);
                        form.submit();
                    }
                }));

                form.addSubmitHandler(new FormPanel.SubmitHandler() {
                    public void onSubmit(SubmitEvent event) {
                        if (!"".equalsIgnoreCase(fu.getFilename())) {
                            GWT.log("UPLOADING FILE????", null);
                            // NOW WHAT????
                        } else {
                            event.cancel(); // cancel the event
                        }
                    }
                });

                form.addSubmitCompleteHandler(new FormPanel.SubmitCompleteHandler() {
                    public void onSubmitComplete(SubmitCompleteEvent event) {
                        Window.alert(event.getResults());
                    }
                });

                form.add(holder);
                return form;
            }
        }

    Now, what do I need to do next? What do I need to put in web.xml, and how do I write my servlet so I can store the file and return the URL of that object (if possible)?


  • SQL Server Maintenance Plan Tasks & Completion

    - by Ben
    Hi All, I have a maintenance plan that looks like this:

        Client 1: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client 2: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client N: ...

    Import Data and Process Data are calling jobs, and Post Process is an Execute SQL task. If Import Data or Process Data fail, the plan goes on to the next client's Import Data. Both Import Data and Process Data are jobs that contain SSIS packages using the built-in SQL logging provider. My expectation with the configuration as it stands is:

        Client 1 Import Data runs: Failure -> Client 2 Import Data | Success -> Process Data
        Process Data runs:         Failure -> Client 2 Import Data | Success -> Post Process
        Post Process runs:         Completion (Success or Failure) -> Next Client Import Data

    This isn't what I'm seeing in my logs, though... I see several Client Import Data SSIS log entries, then several Post Process log entries, then back to Client Import Data! Arg!! What am I doing wrong? I didn't think the "success" piece of Client 1 Import Data would kick off until it... well... succeeded, aka finished! The logs seem to indicate otherwise, though. I really need these tasks to run consecutively, not concurrently. Is this possible? Thanks!


  • Can't get a location

    - by user1891806
    I am trying to get a location (latitude and longitude), but my location keeps returning null and I can't figure out what I'm doing wrong. Does anybody know?

        public class ParseJSON extends Activity implements LocationListener {
            private TextView latituteField;
            private TextView longitudeField;
            LocationManager lm;
            String provider;
            int lat;
            int lon;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.test);

                lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
                Criteria crit = new Criteria();
                provider = lm.getBestProvider(crit, false);
                Location location = lm.getLastKnownLocation(provider);

                if (location != null) {
                    lat = (int) location.getLatitude();
                    lon = (int) location.getLongitude();
                    latituteField = (TextView) findViewById(R.id.lattest);
                    longitudeField = (TextView) findViewById(R.id.lontest);
                    latituteField.setText(lat);
                    longitudeField.setText(String.valueOf(lon));
                } else {
                    Toast.makeText(ParseJSON.this, "Couldnt get location", Toast.LENGTH_SHORT);
                }

                String readWeatherFeed = readWeatherFeed();
                try {
                    JSONArray jsonArray = new JSONArray(readWeatherFeed);
                    Log.i(ParseJSON.class.getName(), "Number of entries " + jsonArray.length());
                    for (int i = 0; i < jsonArray.length(); i++) {
                        JSONObject jsonObject = jsonArray.getJSONObject(i);
                        Log.i(ParseJSON.class.getName(), jsonObject.getString("text"));
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

    I don't have a clue; thanks in advance.


  • sar command to generate only CPU utilization and network statistics in Linux

    - by Vijay Shankar Kalyanaraman
    I want to generate the system activity report every 30 seconds or every minute, store it in a file, and use it for diagnostic purposes on my VM. So I give sar an output file and read it back using the "-f" option. But I only use the CPU utilization and network utilization parts of the report, so the rest is data I don't want to save (it wastes disk space to store these reports). Also, the sar files that are generated are all binary. Is there a way to collect stats for CPU and network utilization alone, and so save almost two-thirds of the space on the disk?
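    A hedged sketch of one way to do this without the binary files: run sar in text mode from cron (or a loop) and keep only the CPU and network reports; the interval and file names here are illustrative:

        #!/bin/sh
        # Collect one 30-second sample of only CPU and network statistics,
        # appending plain text; schedule this script from cron.
        sar -u 30 1 >> /var/log/sar-cpu.txt       # -u: CPU utilization
        sar -n DEV 30 1 >> /var/log/sar-net.txt   # -n DEV: per-interface traffic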


  • How to clone the drag event using jQuery and jQuery UI

    - by zjm1126
    I want to create a new '.b' div appended to document.body that is draggable like its parent, but I cannot clone the drag event. How can I do this? Thanks. This is my code:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
            <meta name="viewport" content="width=device-width, user-scalable=no">
        </head>
        <body>
        <style type="text/css" media="screen">
        </style>
        <div id="map_canvas" style="width: 500px; height: 300px;background:blue;"></div>
        <div class=b style="width: 20px; height: 20px;background:red;position:absolute;left:700px;top:200px;"></div>
        <script src="jquery-1.4.2.js" type="text/javascript"></script>
        <script src="jquery-ui-1.8rc3.custom.min.js" type="text/javascript"></script>
        <script type="text/javascript" charset="utf-8">
            $(".b").draggable({
                start: function(event, ui) {
                    //console.log(ui)
                    //$(ui.helper).clone(true).appendTo($(document.body))
                    $(this).clone(true).appendTo($(document.body)) // draggable is not cloned
                }
            });
            $("#map_canvas").droppable({
                drop: function(event, ui) {
                    //console.log(ui.offset.left+' '+ui.offset.top)
                    ui.draggable.remove();
                }
            });
        </script>
        </body>
        </html>


  • Apache multiple URL to one domain redirect

    - by Christian Moser
    For the last two days I've been spending a lot of time trying to solve my problem; maybe someone can help me. Problem: I need to redirect different URLs to one Tomcat web base dir used for Artifactory. The following URLs should point to the tomcat/artifactory webapp:

        maven-repo.example.local
        maven-repo.example.local/artifactory
        srv-example/artifactory

    where maven-repo.example.local is the DNS entry for the server hostname "srv-example". I'm accessing the Tomcat app through mod_jk, and the webapp is in the ROOT directory. This is what I've got so far:

        <VirtualHost *:80>
            # If URL contains "artifactory", strip it and redirect
            RewriteEngine on
            RewriteCond %{HTTP_HOST} ^\artifactory\$ [NC]
            # (how can I remove 'artifactory' from the redirected parameters?)
            RewriteRule ^(.*)$ http://maven-repo.example.local/$1 [R=301,L]
            ServerName localhost
            ErrorLog "logs/redirect-error_log"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName maven-repo.example.local
            ErrorLog "logs/maven-repo.example.local-error.log"
            CustomLog "logs/maven-repo.example.local-access.log" common
            # calling tomcat webapp in ROOT
            JkMount /* ajp13w
        </VirtualHost>

    The webapp works with "maven-repo.example.local", but with "maven-repo.example.local/artifactory" Tomcat gives a 404: "The requested resource () is not available." It seems that mod_rewrite isn't taking any effect, even if I redirect to another page, e.g. google.com. I'm testing on Windows 7 with maven-repo.example.local added to the "system32/drivers/etc/hosts" file. Thanks in advance!


  • Script throwing unexpected operator when using mysqldump

    - by Astron
    A portion of a script I use to back up MySQL databases stopped working correctly after upgrading a Debian box to 6.0 Squeeze. I have tested the backup code via the CLI and it works fine. I believe the problem is in the selection of the databases before the backup occurs, possibly something to do with the $skipdb variable. If there is a better way to perform the function then I'm willing to try something new. Any insight would be greatly appreciated.

        $ sudo ./script.sh
        [: 138: information_schema: unexpected operator
        [: 138: -1: unexpected operator
        [: 138: mysql: unexpected operator
        [: 138: -1: unexpected operator

    Using bash -x script, here is one of the iterations:

        + for db in '$DBS'
        + skipdb=-1
        + '[' test '!=' '' ']'
        + for i in '$IGGY'
        + '[' mysql == test ']'
        + :
        + '[' -1 == -1 ']'
        ++ /bin/date +%F
        + FILE=/backups/hostname.2011-03-20.mysql.mysql.tar.gz
        + '[' no = yes ']'
        + /usr/bin/mysqldump --single-transaction -u root -h localhost '-ppassword' mysql
        + /bin/tar -czvf /backups/hostname.2011-03-20.mysql.mysql.tar.gz mysql.sql mysql.sql
        + rm -f mysql.sql

    Here is the code:

        if [ $MYSQL_UP = "yes" ]; then
            echo "MySQL DUMP" >> /tmp/update.log
            echo "--------------------------------" >> /tmp/update.log
            DBS="$($MYSQL -u $MyUSER -h $MyHOST -p"$MyPASS" -Bse 'show databases')"
            for db in $DBS
            do
                skipdb=-1
                if [ "$IGGY" != "" ] ; then
                    for i in $IGGY
                    do
                        [ "$db" == "$i" ] && skipdb=1 || :
                    done
                fi
                if [ "$skipdb" == "-1" ] ; then
                    FILE="$DEST$HOST.`$DATE +"%F"`.$db.mysql.tar.gz"
                    if [ $ENCRYPT = "yes" ]; then
                        $MYSQLDUMP -u $MyUSER -h $MyHOST -p"$MyPASS" $db > $db.sql && $TAR -czvf - $db.sql | $OPENSSL enc -aes-256-cbc -salt -out $FILE.enc -k $ENC_PASS && rm -f $db.sql
                    else
                        $MYSQLDUMP --single-transaction -u $MyUSER -h $MyHOST -p"$MyPASS" $db > $db.sql && $TAR -czvf $FILE $db.sql && rm -f $db.sql
                    fi
                fi
            done
        fi
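    A hedged reading of those errors: on Squeeze, /bin/sh is dash, whose [ builtin rejects the bash-only == operator used by the inner test, which matches the "[: 138: ... unexpected operator" messages exactly. A minimal sketch of the fix (either change alone should do):

        # Option 1: run the script under bash explicitly.
        sudo bash ./script.sh

        # Option 2: make the comparisons POSIX so dash accepts them, e.g.:
        [ "$db" = "$i" ] && skipdb=1 || :
        if [ "$skipdb" = "-1" ] ; then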


  • WCF service returns error 500 on /js request

    - by Cine
    I have a WCF service that randomly begins to fail when requesting the autogenerated JavaScript that WCF supports generating, but I have had no luck tracking down why. The js endpoint is part of the WCF feature set, so I don't know how it can suddenly begin to fail and stay broken until IIS is recycled. The HTTP log gives me:

        2010-06-10 09:11:49 W3SVC2095255988 myip GET /path/myservice.svc/js _=1276161113900 80 - ip browser 500 0 0

    So it's an error 500, and that is about the only thing I can figure out. The event log contains no information. Requests to /path/myservice.svc work just fine. After recycling IIS it works again, and some days later it begins to fail until I recycle IIS.

        <service name="path.myservice" behaviorConfiguration="b">
            <endpoint address="" behaviorConfiguration="eb"
                      binding="webHttpBinding"
                      contract="path.Imyservice" />
        </service>
        ...
        <endpointBehaviors>
            <behavior name="eb">
                <enableWebScript />
            </behavior>
        </endpointBehaviors>
        <serviceBehaviors>
            <behavior name="b">
                <dataContractSerializer maxItemsInObjectGraph="2147483647" />
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" />
            </behavior>
        </serviceBehaviors>

    I don't see any problems in the web.config settings either. Any clues how I can track down what the problem is?


  • Debugging JSR 168 portlets with Spring, Eclipse & Pluto

    - by mikep
    I am trying to set up a development environment to test Spring Portlet MVC for development of JSR 168 conforming portlets. I have the latest STS installed, which includes Spring 2.5 and Eclipse (Catalina). This has been my environment to develop with Spring MVC, and that works fine using Apache as a local server for debugging. I found some instructions on the Pluto portal site on using Pluto as a remote debugging host for portlets, and I have implemented those instructions. I am sending Eclipse into debug mode by right-clicking on one of the JSPs and choosing "Debug As". My problem is that when I log into Pluto, it is not sending me into debug mode: I am seeing the default Pluto page as opposed to my portlet. My portlet has not been installed onto Pluto, and the instructions do not seem to require the portlet to be installed. To help, I have a screenshot at http://www.ceruleaninc.ca/pluto%5Fproblem.jpg showing the following:

    1. Eclipse remote-debugging to localhost:8000
    2. Tomcat showing "Listening for transport dt_socket at address: 8000"
    3. The catalina.bat jpda start command
    4. The Pluto portal screen after log in

    Thanks much! I would welcome any advice on approaches to debugging portlets; I am not tied to Pluto. There does seem to be a lack of detailed instructions on this topic.


  • SQL Server 2008 - Login failed for user 'user1' The user is not associated with a trusted SQL Server connection

    - by difek
    I have installed SQL Server 2008 R2 on Windows XP. During installation I selected 'SQL Server and Windows Authentication Mode'. When I right-click the server in SQL Server Management Studio, under Security, 'SQL Server and Windows Authentication mode' is selected. But when I go to my database - Properties - View connection properties, the Authentication Method is set to Windows Authentication. One user, user1 with password user1, was added to my database, but I can't log in to the database from C# (Visual Studio 2008) because this error occurs:

        Login failed for user 'user1'. The user is not associated with a trusted SQL Server connection.

    What isn't right? When I use:

        string connectionStr = @"Data Source=rmzcmp\SQLExpress;Initial Catalog=ResourcesTmp;Integrated Security=True";

    I get the following error:

        Cannot open database "ResourcesTmp" requested by the login. The login failed.
        Login failed for user 'RMZCMP\rm'.

    rm is the user name I log in to my computer with. When I use rm, I again get:

        Login failed for user 'rm'. The user is not associated with a trusted SQL Server connection.

    Regards


  • Getting value of "i" from GEvent

    - by Cosizzle
    Hello, I'm trying to add an event listener to each icon on the map when it's pressed. I'm storing the information in the database, and the value that I want to retrieve is "i". However, when I output "i" I get its last value, which is 5 (there are 6 objects being drawn onto the map). Below is the code; what would be the best way to get the value of i, and not the object itself?

        var drawLotLoc = function(id) {
            var lotLoc = new GIcon(G_DEFAULT_ICON);                // create icon object
            lotLoc.image = url + "images/markers/lotLocation.gif"; // set the icon image
            lotLoc.shadow = "";                                    // no shadow
            lotLoc.iconSize = new GSize(24, 24);                   // set the size
            var markerOptions = { icon: lotLoc };

            $.post(opts.postScript, {action: 'drawlotLoc', id: id}, function(data) {
                var markers = new Array();
                // lotLoc[x].description
                // lotLoc[x].lat
                // lotLoc[x].lng
                // lotLoc[x].nighbourhood
                // lotLoc[x].lot
                var lotLoc = $.evalJSON(data);
                for (var i = 0; i < lotLoc.length; i++) {
                    var spLat = parseFloat(lotLoc[i].lat);
                    var spLng = parseFloat(lotLoc[i].lng);
                    var latlng = new GLatLng(spLat, spLng);
                    markers[i] = new GMarker(latlng, markerOptions);
                    myMap.addOverlay(markers[i]);
                    GEvent.addListener(markers[i], "click", function() {
                        console.log(i);  // returning 5 in all cases.
                                         // I _need_ this to be unique to the object being clicked.
                        console.log(this);
                    });
                }
            });


  • FTP Reporting Disk Quota Exceeded

    - by Austin
    I am using Notepad++ with FTP_Synchronize to upload files to a server; however, it appears that the upload is rejected with "Disk quota exceeded":

        11:18:49 > -> TYPE I
        11:18:49 > Response (200): Type set to I
        11:18:49 > -> PASV
        11:18:49 > Response (227): Entering Passive Mode (*,*,*,*,*,*).
        11:18:50 > -> STOR /home/*/../../var/www/html/test.html
        11:18:50 > Response (150): Opening BINARY mode data connection for /home/*/../../var/www/html/test.html
        11:18:50 > Response (552): Transfer aborted. Disk quota exceeded

    Now it may appear that my disk quota really is exceeded; however, the back-end shows:

        Total Used Bandwidth    107.055 MB
        Allowed Quota           3,000.0 MB

    Note: stars were put in place of irrelevant data.
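    A hedged observation: the two panel numbers quoted describe a bandwidth quota, while FTP reply 552 concerns storage, so the disk quota may be a separate limit. If shell access to the host is available, something like this would check the storage side (the path is illustrative):

        # Per-user disk quota, if quotas are enabled on the server.
        quota -s

        # Free space on the filesystem the upload targets.
        df -h /var/www/html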


  • Why is PackageInfo.requestedPermissions always null?

    - by Luke
    I'm trying to enumerate all the permissions used by all the installed packages; however, when I check the requestedPermissions property for each of my installed packages, it is always null. The following is the code that does this:

        private HashMap<String, List<String>> buildMasterList() {
            HashMap<String, List<String>> result = new HashMap<String, List<String>>();
            List<PackageInfo> packages =
                getPackageManager().getInstalledPackages(PackageManager.GET_PERMISSIONS);
            for (int i = 0; i < packages.size(); i++) {
                String[] packagePermissions = packages.get(i).requestedPermissions;
                Log.d("AppList", packages.get(i).packageName);
                if (packagePermissions != null) {
                    for (int j = 0; j < packagePermissions.length; j++) {
                        if (!result.containsKey(packagePermissions[j])) {
                            result.put(packagePermissions[j], new ArrayList<String>());
                        }
                        result.get(packagePermissions[j]).add(packages.get(i).packageName);
                    }
                } else {
                    Log.d("AppList", packages.get(i).packageName + ": no permissions");
                }
            }
            return result;
        }

    Edit: Oops! I just needed to pass the PackageManager.GET_PERMISSIONS flag to getInstalledPackages(). The code snippet above has been updated.


  • I need a few minutes of dedicated server a week, but not for hosting, just to convert mp3 to ogg etc.

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, and that's run an instance of SoX to convert about 30 mp3s to ogg files, in various directories, a couple of times a week, done automatically in response to the detection of an mp3 upload. I'm probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and then use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do would be: wget/ftp the mp3 files, convert them to ogg, ftp the files back to my hosting. Of course, all this wouldn't be needed if there were such a thing as a compiled binary of SoX (or any mp3-to-ogg converter) for CentOS which I could upload without needing root access, but I've given up asking for that. Always open to suggestions!
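    For reference, a hedged sketch of the conversion step itself, as it might run on any small server or VPS with sox and its mp3 handler installed (the directory name is illustrative):

        #!/bin/sh
        # Convert every mp3 under INBOX to ogg in place; sox needs mp3
        # support compiled in (e.g. the libsox-fmt-mp3 package on Debian).
        INBOX=/tmp/inbox
        for f in "$INBOX"/*.mp3; do
            sox "$f" "${f%.mp3}.ogg"
        done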


  • Can't start the portable version of NetBeans 6.9.1 IDE

    - by Coder
    I downloaded the portable version of NetBeans (netbeans-6.9.1-201007282301-ml.zip) from the NetBeans site and changed the config file etc/netbeans.conf as indicated on the NetBeans site. The file contents are below.

        # ${HOME} will be replaced by JVM user.home system property
        #netbeans_default_userdir="${HOME}/.netbeans/6.9"
        netbeans_default_userdir=".netbeans/6.9"

        # Options used by NetBeans launcher by default, can be overridden by explicit
        # command line switches:
        netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m -J-XX:MaxPermSize=200m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true"
        # Note that a default -Xmx is selected for you automatically.
        # You can find this value in var/log/messages.log file in your userdir.
        # The automatically selected value can be overridden by specifying -J-Xmx here
        # or on the command line.

        # If you specify the heap size (-Xmx) explicitely, you may also want to enable
        # Concurrent Mark & Sweep garbage collector. In such case add the following
        # options to the netbeans_default_options:
        # -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled -J-XX:+CMSPermGenSweepingEnabled
        # (see http://wiki.netbeans.org/wiki/view/FaqGCPauses)

        # Default location of JDK, can be overridden by using --jdkhome <dir>:
        #netbeans_jdkhome="/path/to/jdk"
        netbeans_jdkhome="C:\Program Files\Java\jdk1.6.0_24\"

        # Additional module clusters, using ${path.separator} (';' on Windows or ':' on Unix):
        #netbeans_extraclusters="/absolute/path/to/cluster1:/absolute/path/to/cluster2"

        # If you have some problems with detect of proxy settings, you may want to enable
        # detect the proxy settings provided by JDK5 or higher.
        # In such case add -J-Djava.net.useSystemProxies=true to the netbeans_default_options.

    But it refuses to start when I try to run it. If I change the JDK path to something incorrect, it complains that it can't find the JDK, so I think the JDK path is correct. It also creates a .netbeans directory when I try to start it. I don't see any errors; it just doesn't do anything else observable. Does anybody know how to set up this version of NetBeans? Thanks.


  • HSQLDB connection error

    - by user1478527
    I created a Java program to connect to HSQLDB. The first configuration works well:

        public final static String DRIVER = "org.hsqldb.jdbcDriver";
        public final static String URL = "jdbc:hsqldb:file:F:/hsqlTest/data/db";
        public final static String DBNAME = "SA";

    but this one does not:

        public final static String DRIVER = "org.hsqldb.jdbcDriver";
        public final static String URL = "jdbc:hsqldb:file:C:/Program Files/tich Tools/mos tech/app/data/db/t1/t2/01/db";
        public final static String DBNAME = "SA";

    The error looks like this:

        java.sql.SQLException: error in script file line: 1 unexpected token: ?
            at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
            at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
            at org.hsqldb.jdbc.JDBCConnection.<init>(Unknown Source)
            at org.hsqldb.jdbc.JDBCDriver.getConnection(Unknown Source)
            at org.hsqldb.jdbc.JDBCDriver.connect(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at HSQLDBManagerImp.getconn(HSQLDBManagerImp.java:48)
            at TESTHSQLDB.main(TESTHSQLDB.java:15)
        Caused by: org.hsqldb.HsqlException: error in script file line: 1 unexpected token: ?
            at org.hsqldb.error.Error.error(Unknown Source)
            at org.hsqldb.scriptio.ScriptReaderText.readDDL(Unknown Source)
            at org.hsqldb.scriptio.ScriptReaderBase.readAll(Unknown Source)
            at org.hsqldb.persist.Log.processScript(Unknown Source)
            at org.hsqldb.persist.Log.open(Unknown Source)
            at org.hsqldb.persist.Logger.openPersistence(Unknown Source)
            at org.hsqldb.Database.reopen(Unknown Source)
            at org.hsqldb.Database.open(Unknown Source)
            at org.hsqldb.DatabaseManager.getDatabase(Unknown Source)
            at org.hsqldb.DatabaseManager.newSession(Unknown Source)
            ... 7 more
        Caused by: org.hsqldb.HsqlException: unexpected token: ?
            at org.hsqldb.error.Error.parseError(Unknown Source)
            at org.hsqldb.ParserBase.unexpectedToken(Unknown Source)
            at org.hsqldb.ParserCommand.compilePart(Unknown Source)
            at org.hsqldb.ParserCommand.compileStatement(Unknown Source)
            at org.hsqldb.Session.compileStatement(Unknown Source)
            ... 16 more

    I googled this connection question, but most results didn't help much; some say it may be an HSQLDB version problem. The database was shut down in "SHUT COMPRESS" mode. Can anyone give some advice?


  • MySQL BinLog Statement Retrieval

    - by Jonathon
    I have seven 1 GB MySQL binlog files that I have to use to retrieve some "lost" information. I only need to get certain INSERT statements from the log (e.g., statements starting with "INSERT INTO table SET field1="). If I just run mysqlbinlog (even per database and with --short-form), I get a text file that is several hundred megabytes, which makes it almost impossible to parse with any other program. Is there a way to retrieve only certain SQL statements from the log? I don't need any of the ancillary information (timestamps, autoincrement numbers, etc.), just a list of SQL statements that match a certain string. Ideally, I would like a text file that just lists those statements, such as:

        INSERT INTO table SET field1='a';
        INSERT INTO table SET field1='tommy';
        INSERT INTO table SET field1='2';

    I could get that by running mysqlbinlog to a text file and then parsing the results based upon a string, but the text file is way too big. It just times out any script I run and even makes it impossible to open in a text editor. Thanks for your help in advance.
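    A hedged sketch of the usual streaming workaround: pipe mysqlbinlog straight into grep so the giant decoded text never has to land on disk or fit in an editor (the binlog file names are illustrative; the pattern comes from the question):

        # Decode each binlog and keep only the wanted INSERT statements.
        for f in mysql-bin.*; do
            mysqlbinlog --short-form "$f" |
                grep "^INSERT INTO table SET field1=" >> inserts.sql
        done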


  • How to set the build.xml for the Groovy Antbuilder?

    - by Jan
    I want to execute a build.xml (Ant buildfile) from GMaven (a Maven plugin for inline execution of Groovy in a POM). Since I have to execute the buildfile several times, using the maven-antrun-plugin is not an option at the moment. I take a list of properties (environment and machine names) from an XML file and want to execute Ant builds for each of those machines. I found the executeTarget method in the javadocs, but not how to set the location of the buildfile. How can I do that, and is this enough? What I have looks as follows:

        <plugin>
            <groupId>org.codehaus.groovy.maven</groupId>
            <artifactId>gmaven-plugin</artifactId>
            <executions>
                <execution>
                    <id>some ant builds</id>
                    <phase>process-sources</phase>
                    <goals>
                        <goal>execute</goal>
                    </goals>
                    <configuration>
                        <source>
                            def ant = new AntBuilder()
                            def machines = new XmlParser().parse(new File(project.build.outputDirectory + '/MachineList.xml'));
                            machines.children().each {
                                log.info('Creating machine description for ' + it.Id.text() + ' / ' + it.Environment.text());
                                ant.project.setProperty('Environment', it.Environment.text());
                                ant.project.setProperty('Machine', it.Id.text());
                                // What's missing?
                                ant.project.executeTarget('Tailoring');
                            }
                            log.info('Ant has finished.')
                        </source>
                    </configuration>
                </execution>
            </executions>
        </plugin>

