Search Results

Search found 21131 results on 846 pages for 'binary log'.

Page 377/846

  • Git can no longer open emacs as its editor

    - by mwilliams
    I'm running Git version 1.7.3.2 that I built from source, zsh is my shell, and emacs is my editor. Recently I started seeing the following: /usr/local/Cellar/git/1.7.3.2/libexec/git-core/git-sh-setup: line 106: emacs: command not found Could not execute editor My zshrc looks like the following so I can use the Cocoa build and the console binary provided with it. EMACS_HOME="/Applications/Emacs.app/Contents/MacOS" function e() { PATH=$EMACS_HOME/bin:$PATH $EMACS_HOME/Emacs -nw $@ } function ec() { PATH=$EMACS_HOME/bin:$PATH emacsclient -t $@ } function es() { e --daemon=$1 && ec -s $1 } function el() { ps ax|grep Emacs } function ek() { $EMACS_HOME/bin/emacsclient -e '(kill-emacs)' -s $1 } function ecompile() { e -eval "(setq load-path (cons (expand-file-name \".\") load-path))" \ -batch -f batch-byte-compile $@ } alias emacs=e alias emacsclient=ec And I also have export EDITOR="emacs" and have tried adding export GIT_EDITOR="emacs" (and swapping that out with "e") But whatever I try I can't get git to open emacs whenever I need to do a commit or an interactive rebase, etc etc...
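
    A minimal sketch of one common workaround (not from the original post): zsh functions and aliases such as e only exist in interactive shells, so the non-interactive shell git spawns cannot see them, and a bare emacs is not on its PATH either. Putting a small executable wrapper on PATH and pointing git at it sidesteps both problems; the script name and location below are invented for illustration, and the Emacs.app path is the one quoted in the question.

        #!/bin/sh
        # ~/bin/emacs-for-git: hypothetical wrapper; adjust if Emacs.app lives elsewhere
        exec /Applications/Emacs.app/Contents/MacOS/Emacs -nw "$@"

        # then make it executable and tell git about it:
        #   chmod +x ~/bin/emacs-for-git
        #   git config --global core.editor ~/bin/emacs-for-git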

    Read the article

  • jQuery .append() - only the last element of my list is appended, previous ones are erased

    - by jaes
    Hi, I have a page like this : <div id="daysTable"> <div id="day0" class="day"></div> <div id="day1" class="day"></div> <div id="day2" class="day"></div> <div id="day3" class="day"></div> <div id="day4" class="day"></div> <div id="day5" class="day"></div> <div id="day6" class="day"></div> </div> and some javascript to fill my calendar like this function getWeek(){ $.getJSON("/getWeek",function(events){ var eventHeight = $("#hoursTable > div").height(); var eventWidth = $("#daysTable > div").width(); var startWeek = events[0]// timestamp of the start of the week for(var i = 1; i < events.length; i ++){ $(".day").empty(); var startHour = (events[i].startDate - startWeek)/3600 var duration = (events[i].stopDate - startWeek)/3600 - startHour var dayStart = Math.floor(startHour/24); var startHour = startHour - dayStart * 24 divEvent = $('<div id="event'+events[i].idEvent+'"/>') .width(eventWidth-2) .height(duration*eventHeight) .css("border","1px solid black") .css("margin-top",startHour*eventHeight) .html(events[i].name); divEvent.appendTo("#day"+dayStart); console.log(divEvent); } }); } my problem being : events contain 3 element I'd like to display but only the last is displayed. If I stop my "for" at the first iteration I can see the first div appended, but it seems that if my loop goes for three iteration the two previous are deleted. The console.log() display some "not-anymore" existing element. Any idea ?
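
    A hedged guess at the cause, with a sketch: $(".day").empty() sits inside the loop, so every iteration wipes all seven day cells, including the ones the previous iterations just filled, which is why only the last event survives and why console.log shows detached elements. Moving the empty() call in front of the loop (all other names and the /getWeek URL are as in the question) is the minimal change to try:

        function getWeek() {
          $.getJSON("/getWeek", function (events) {
            $(".day").empty();                       // clear the calendar once, up front
            var eventHeight = $("#hoursTable > div").height();
            var eventWidth  = $("#daysTable > div").width();
            var startWeek   = events[0];             // timestamp of the start of the week
            for (var i = 1; i < events.length; i++) {
              var startHour = (events[i].startDate - startWeek) / 3600;
              var duration  = (events[i].stopDate  - startWeek) / 3600 - startHour;
              var dayStart  = Math.floor(startHour / 24);
              startHour     = startHour - dayStart * 24;
              $('<div id="event' + events[i].idEvent + '"/>')
                .width(eventWidth - 2)
                .height(duration * eventHeight)
                .css("border", "1px solid black")
                .css("margin-top", startHour * eventHeight)
                .html(events[i].name)
                .appendTo("#day" + dayStart);        // earlier events are no longer erased
            }
          });
        }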

    Read the article

  • JavaScript .splice() not working correctly

    - by adardesign
    I am setting a cookie for each navigation container that is clicked on. It sets an array that is joined and set the cookie value. if its clicked again then its removed from the array. It somehow buggy. It only splices after clicking on other elements. and then it behaves weird. Thanks much. var navLinkToOpen; var setNavCookie = function(value){ var isSet = false; var checkCookies = checkNavCookie() setCookieHelper = checkCookies? checkCookies.split(","): []; console.log("value passed", value) for(i in setCookieHelper){ if(value == setCookieHelper[i]){ setCookieHelper.splice(value,1); isSet = true; } } if(!isSet){ setCookieHelper.push(value) } setCookieHelper.join(",") document.cookie = "navLinkToOpen"+"="+setCookieHelper; } var checkNavCookie = function(){ var allCookies = document.cookie.split( ';' ); for (i = 0; i < allCookies.length; i++ ){ temp = allCookies[i].split("=") if(temp[0].match("navLinkToOpen")){ var getValue = temp[1] } } return getValue || false } $(document).ready(function() { $("#LeftNav li").has("b").addClass("navHeader").not(":first").siblings("li").hide() $(".navHeader").click(function(){ $(this).toggleClass("collapsed").nextUntil("li:has('b')").slideToggle(300); setNavCookie($('.navHeader').index($(this))) return false }) console.log("init",document.cookie) var testCookies = checkNavCookie(); if(testCookies){ finalArrayValue = testCookies.split(",") for(i in finalArrayValue){ $(".navHeader").eq(finalArrayValue[i]).toggleClass("collapsed").nextUntil(".navHeader").slideToggle (0); } } });
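
    One hedged reading of the bug, sketched below: Array.prototype.splice takes an index as its first argument, but the code passes the clicked value itself, so an entry is only removed when the value happens to equal its own position in the array. Removing by the loop index (everything else as in the question) is the usual fix; note that values read back from document.cookie are strings while value is a number, which is why the loose == comparison is kept.

        var setNavCookie = function (value) {
          var checkCookies = checkNavCookie();
          var setCookieHelper = checkCookies ? checkCookies.split(",") : [];
          var isSet = false;
          for (var i = 0; i < setCookieHelper.length; i++) {
            if (setCookieHelper[i] == value) {   // string from the cookie vs. numeric index
              setCookieHelper.splice(i, 1);      // splice wants the position, not the value
              isSet = true;
              break;
            }
          }
          if (!isSet) {
            setCookieHelper.push(value);
          }
          document.cookie = "navLinkToOpen=" + setCookieHelper.join(",");
        };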

    Read the article

  • Apache proxy to Lighttpd: changing $_SERVER['HTTP_HOST'] in PHP

    - by watain
    I have a WordPress blog running on lighttpd-1.4.19, listening on at www00:81. On the same host, apache-2.2.11 listens on port 80, which creates a proxy connection from http://blog.mydomain.org:80 to http://blog.mydomain.org:81. The Apache virtualhost looks as follows: <VirtualHost *:80> ServerName blog.mydomain.org ProxyRequests Off <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass / http://blog.mydomain.org:81/ ProxyPassReverse / http://blog.mydomain.org:81/ </VirtualHost> Using debug.log-request-handling = "enable" I get the following log entry when I browse http://blog.mydomain.org:80 (notice the Host headers): 2010-05-10 08:47:14: (request.c.294) fd: 6 request-len: 853 GET / HTTP/1.1 Host: blog.mydomain.org:81 [...] 2010-05-10 08:47:15: (request.c.294) fd: 8 request-len: 754 GET /wp-content/uploads/2010/01/image.gif?w=280 HTTP/1.1 Host: www00:81 My problem: as far as I know, the PHP environment variable $_SERVER['HTTP_HOST'] is set to that Host header variable. Unfortunately, WordPress uses that variable in their system to create URLs to pictures on the blog. These URLs won't be accessible behind a firewall of course. How can I force the host header to be blog.mydomain.org instead of blog.mydomain.org:81, respectively www00:81? I already added set server.name = "blog.mydomain.org" to my lighttpd.conf, but this didn't work. Any suggestions are appreciated, thank you.
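
    A likely fix, sketched on the Apache side rather than in lighttpd: mod_proxy rewrites the Host header to match the ProxyPass target unless ProxyPreserveHost is enabled. With it switched on, the backend (and therefore $_SERVER['HTTP_HOST'] in PHP) sees blog.mydomain.org exactly as the browser sent it. The block below only adds one directive to the virtual host quoted in the question.

        <VirtualHost *:80>
            ServerName blog.mydomain.org
            ProxyRequests Off
            ProxyPreserveHost On          # forward the browser's Host: header to lighttpd
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://blog.mydomain.org:81/
            ProxyPassReverse / http://blog.mydomain.org:81/
        </VirtualHost>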

    Read the article

  • Eclipse: Organising Files

    - by someguy
    I want to import a project that I'm planning to build upon. The problem is that it is very messy; with source files, class files and libraries under one directory. How would I organise these files using Eclipse? I know you can change the source folder and output folder, but when I do change the source folder, the files that I want inside it do not physically move to that folder. Output folder is fine, though. Also, I would like a separate folder for libraries. I'm not sure how I would go about this, however. Here's how I would like it: src: This folder will contain source files. bin: This folder will contain binary (class) files. lib: This folder will contain external libraries.
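
    For reference, a hand-written sketch of what the project's .classpath ends up looking like once sources, output and libraries live in separate folders; the jar name is a placeholder. Eclipse only rewrites this file, it does not move files on disk, so the source files still have to be dragged into src/ in the Package Explorer (or moved on disk, followed by a refresh).

        <?xml version="1.0" encoding="UTF-8"?>
        <classpath>
            <!-- source files under src/, compiled .class files under bin/ -->
            <classpathentry kind="src" path="src"/>
            <classpathentry kind="output" path="bin"/>
            <!-- external libraries collected under lib/ -->
            <classpathentry kind="lib" path="lib/some-library.jar"/>
            <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
        </classpath>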

    Read the article

  • Bootstrap: want to trigger some custom events on layout change

    - by DS_web_developer
    So, there are some events in my app that changes layout on my page (re-position of the elements)... mostly is done by bootstrap collapsing and fading (tabs, collapsibles, accordions)... I would like to fire an event whenever a change is about to happen and another when the change is done.. right now I came out with something like that $('.collapse').on("shown hidden", function(){ jQuery(myAPP).trigger("layoutchanged"); }); $('.collapse').on("show hide", function(){ jQuery(myAPP).trigger("layoutchanging"); }); and then... jQuery(myAPP).on("layoutchanging", function(e){ log("Start changing"); }); jQuery(myAPP).on("layoutchanged", function(e){ log("Layout changed"); }); it works for collapse and accordions OK. but on tabs, where the markup is like this: <ul class="nav nav-tabs can_deactivate"> <li><a href="#tab_1" data-toggle="tab">Open Tab 1</a></li> <li><a href="#tab_2" data-toggle="tab">Open Tab 2</a></li> </ul> <div class="tab-content"> <div class="tab-pane fade" id="tab_1"> Lorem ipsum </div> <div class="tab-pane fade" id="tab_2"> Lorem ipsum </div> </div> Works only on show, but not on hide... what can I do? JS FIDDLE: http://jsfiddle.net/KL7Af/
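
    A hedged explanation with a sketch: in Bootstrap 2.x the tab plugin only fires show and shown, on the <a data-toggle="tab"> element rather than on the pane, and has no hide/hidden counterpart, which matches the "works only on show" symptom. Since the old pane is hidden at the same moment the new one is shown, both custom events can be derived from the tab anchors alone:

        // fire the app-level events from the tab anchors instead of the panes
        $('a[data-toggle="tab"]').on('show', function () {
          jQuery(myAPP).trigger('layoutchanging');   // previous pane is about to be hidden
        });
        $('a[data-toggle="tab"]').on('shown', function () {
          jQuery(myAPP).trigger('layoutchanged');    // new pane is visible, layout has settled
        });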

    Read the article

  • Are there supposed to be more restrictions on operator->* overloads?

    - by Potatoswatter
    I was perusing section 13.5 after refuting the notion that built-in operators do not participate in overload resolution, and noticed that there is no section on operator->*. It is just a generic binary operator. Its brethren, operator->, operator*, and operator[], are all required to be non-static member functions. This precludes definition of a free function overload to an operator commonly used to obtain a reference from an object. But the uncommon operator->* is left out. In particular, operator[] has many similarities. It is binary (they missed a golden opportunity to make it n-ary), and it accepts some kind of container on the left and some kind of locator on the right. Its special-rules section, 13.5.5, doesn't seem to have any actual effect except to outlaw free functions. (And that restriction even precludes support for commutativity!) So, for example, this is perfectly legal (in C++0x, remove obvious stuff to translate to C++03): #include <utility> #include <iostream> #include <type_traits> using namespace std; template< class F, class S > typename common_type< F,S >::type operator->*( pair<F,S> const &l, bool r ) { return r? l.second : l.first; } template< class T > T & operator->*( pair<T,T> &l, bool r ) { return r? l.second : l.first; } template< class T > T & operator->*( bool l, pair<T,T> &r ) { return l? r.second : r.first; } int main() { auto x = make_pair( 1, 2.3 ); cerr << x->*false << " " << x->*4 << endl; auto y = make_pair( 5, 6 ); y->*(0) = 7; y->*0->*y = 8; // evaluates to 7->*y = y.second cerr << y.first << " " << y.second << endl; } I can certainly imagine myself giving into temp[la]tation. For example, scaled indexes for vector: v->*matrix_width[5] = x; Did the standards committee forget to prevent this, was it considered too ugly to bother, or are there real-world use cases?

    Read the article

  • To what extent should code try to explain fatal exceptions?

    - by Andrzej Doyle
    I suspect that all non-trivial software is likely to experience situations where it hits an external problem it cannot work around and thus needs to fail. This might be due to bad configuration, an external server being down, disk full, etc. In these situations, especially if the software is running in non-interactive mode, I expect that all one can really do is log an error and wait for the admin to read the logs and fix the problem. If someone happens to interact with the software in the meantime, e.g. a request comes in to a server that failed to initialize properly, then perhaps an appropriate hint can be given to check the logs and maybe even the error can be echoed (depending on whether you can tell if they're a technical guy as opposed to a business user). For the moment though let's not think too hard about this part. My question is, to what extent should the software be responsible for trying to explain the meaning of the fatal error? In general, how much competence/knowledge are you allowed to presume on administrators of the software, and how much should you include troubleshooting information and potential resolution steps when logging fatal errors? Of course if there's something that's unique to the runtime context this should definitely be logged; but lets assume your software needs to talk to Active Directory via LDAP and gets back an error "[LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece]". Is it reasonable to assume that the maintainers will be able to Google the error code and work out what it means, or should the software try to parse the error code and log that this is caused by an incorrect user DN in the LDAP config? I don't know if there is a definitive best-practices answer for this, so I'm keen to hear a variety of views.
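
    Purely as an illustration of the middle ground, a sketch with invented names (not from any particular codebase): log the provider message untouched so it can still be Googled, and append a hint only for the handful of well-known Active Directory sub-codes carried in the data field.

        import java.util.Map;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        final class LdapErrorHints {
            // common AcceptSecurityContext "data" sub-codes reported by Active Directory
            private static final Map<String, String> HINTS = Map.of(
                    "525", "user not found - check the configured user DN",
                    "52e", "invalid credentials - check the bind password",
                    "532", "password expired",
                    "533", "account disabled",
                    "775", "account locked out");

            private static final Pattern DATA_CODE = Pattern.compile("data (\\w+)");

            static String explain(String providerMessage) {
                Matcher m = DATA_CODE.matcher(providerMessage);
                String hint = m.find() ? HINTS.get(m.group(1)) : null;
                // keep the original text; only append a hint when the code is recognised
                return hint == null ? providerMessage
                                    : providerMessage + " [likely cause: " + hint + "]";
            }
        }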

    Read the article

  • Managed bean property value not set to null

    - by Vladimir
    Hi! I'm new to JSF, so this question might be strange. I have an inputText component's value bound to managed bean's property of type Float. I need to set property to null when inputText field is empty, not to 0 value. It's not done by default, so I added converter with the following method implemented: public Object getAsObject(FacesContext arg0, UIComponent arg1, String arg2) throws ConverterException { if (StringUtils.isEmpty(arg2)) { return null; } float result = Float.parseFloat(arg2); if (result == 0) { return null; } return result; } I registered converter, and assigned it to inputText component. I logged arg2 argument, and also logged return value from getAsObject method. By my log I can see that it returns null value. But, I also log setter property on backing bean and argument is 0 value, not null as expected. To be more precise, it is setter property is called twice, once with null argument, second time with 0 value argument. It still sets backing bean value to 0. How can I set value to null? Thanks in advance.
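
    One hedged possibility worth checking, assuming the application runs on Tomcat or another container that uses Apache's EL implementation: after the converter returns null, the EL layer coerces null back to 0 when assigning to a numeric property unless that behaviour is disabled. The usual workaround is a JVM system property, for example in setenv.sh:

        # disable the "null becomes 0" coercion in Apache's EL parser
        JAVA_OPTS="$JAVA_OPTS -Dorg.apache.el.parser.COERCE_TO_ZERO=false"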

    Read the article

  • How can a WiX custom action DLL call be made to use the debug runtime via a merge module?

    - by Benj
    I'm trying to create a debug build with a corresponding debug installer for our product. I'm new to Wix so please forgive any naivety contained herein. The debug Dlls in my project are dependent on both the VS2008 and the VS2008SP1 debug runtimes. I've created a merge module feature in wix to bundle those runtimes with my installer. <Include xmlns="http://schemas.microsoft.com/wix/2006/wi"> <!-- Include our 'variables' file --> <!--<?include variables.wxi ?>--> <!--<Fragment>--> <DirectoryRef Id="TARGETDIR"> <!-- Always install the 32 bit ATL/CRT libraries, but only install the 64 bit ones on a 64 bit build --> <Merge Id="AtlFiles_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_ATL_x86.msm" DiskId="1" Language="1033"/> <Merge Id="AtlPolicy_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_ATL_x86.msm" DiskId="1" Language="1033"/> <Merge Id="CrtFiles_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_DebugCRT_x86.msm" DiskId="1" Language="1033"/> <Merge Id="CrtPolicy_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_DebugCRT_x86.msm" DiskId="1" Language="1033"/> <Merge Id="MfcFiles_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_DebugMFC_x86.msm" DiskId="1" Language="1033"/> <Merge Id="MfcPolicy_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_DebugMFC_x86.msm" DiskId="1" Language="1033"/> <!-- If this is a 64 bit build, install the relevant modules --> <?if $(env.Platform) = "x64" ?> <Merge Id="AtlFiles_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_ATL_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="AtlPolicy_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_ATL_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="CrtFiles_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_DebugCRT_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="CrtPolicy_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_DebugCRT_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="MfcFiles_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_DebugMFC_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="MfcPolicy_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_DebugMFC_x86_x64.msm" DiskId="1" Language="1033"/> <?endif?> </DirectoryRef> <Feature Id="MS2008_SP1_DbgRuntime" Title="VC2008 Debug Runtimes" AllowAdvertise="no" Display="hidden" Level="1"> <!-- 32 bit libraries --> <MergeRef Id="AtlFiles_x86"/> <MergeRef Id="AtlPolicy_x86"/> <MergeRef Id="CrtFiles_x86"/> <MergeRef Id="CrtPolicy_x86"/> <MergeRef Id="MfcFiles_x86"/> <MergeRef Id="MfcPolicy_x86"/> <!-- 64 bit libraries --> <?if $(env.Platform) = "x64" ?> <MergeRef Id="AtlFiles_x64"/> <MergeRef Id="AtlPolicy_x64"/> <MergeRef Id="CrtFiles_x64"/> <MergeRef Id="CrtPolicy_x64"/> <MergeRef Id="MfcFiles_x64"/> <MergeRef Id="MfcPolicy_x64"/> <?endif?> </Feature> <!--</Fragment>--> </Include> If I'm doing a debug build of the installer, I include that feature like so: <!-- The 'Feature' that contains the debug CRT/ATL libraries --> <?if $(var.Configuration) = "Debug"?> <?include ..\includes\MS2008_SP1_DbgRuntime.wxi?> <?endif?> The only problem is that my installer also includes a custom action which is also dependent on the debug runtime: <!-- Private key installer --> <Binary Id="InstallPrivateKey" 
SourceFile="..\InstallPrivateKey\win32\$(var.Configuration)\InstallPrivateKey.dll"></Binary> <CustomAction Id='InstallKey' BinaryKey='InstallPrivateKey' DllEntry='InstallPrivateKey'/> So how can I package the debug run time in such a way that the custom action also has access to it?

    Read the article

  • Setting Subversion "password-stores" does nothing?

    - by Coderer
    The Subversion documentation says that I can set a parameter in ~/.subversion/config like [auths] password-stores = gnome-keyring to have it cache my certificate password in gnome-keyring. I set the option, and nothing happens -- no error messages, no change in behavior, nothing. Maybe I'm missing a log somewhere? I know subversion has to be compiled to support this but AFAIK I'm using the RPM version, which (they say...) ships with it rolled in. Is there a way to check whether my binary supports keyring? Shouldn't it say something if it doesn't?
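
    There is no svn option that reports this directly, but a hedged way to check is to look at what the installed build ships: packages compiled with GNOME Keyring support include a libsvn_auth_gnome_keyring library, sometimes in a separate subpackage. Paths and package names below are typical for an RPM-based layout and may differ on your system.

        # is the GNOME Keyring auth provider library installed anywhere?
        ls /usr/lib*/libsvn_auth_gnome_keyring-1.so* 2>/dev/null

        # is keyring support split into its own subpackage?
        rpm -qa | grep -i subversion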

    Read the article

  • gpg error - connection already closed?

    - by OopsForgotMyOtherUserName
    omg... hope someone can help me because I am so lost as to what to try next.... I don't know what is causing the error to happen, and I don't see how to figure it out... Keep going between the pgloader.conf examples and what I have, and I don't understand why I keep getting the 'connection already closed' error? The first few lines of my fr.conf is at the very end... I'd really appreciate / love some guidance here... Been trying to get this thing going all morning, and am even getting stuck just on this part... Running this command at the command line: /usr/bin/pgloader -c /var/mybin/pgconfs/fr.conf Yields this in the pgloader.log (with the process just hanging) more /tmp/pgloader.log 27-03-2010 12:22:53 pgloader INFO Logger initialized 27-03-2010 12:22:53 pgloader INFO Reformat path is ['/usr/share/python-support/pgloader/reformat'] 27-03-2010 12:22:53 pgloader INFO Will consider following sections: 27-03-2010 12:22:53 pgloader INFO fixed 27-03-2010 12:22:54 fixed INFO fixed processing 27-03-2010 12:22:54 pgloader INFO All threads are started, wait for them to terminate 27-03-2010 12:22:57 fixed ERROR connection already closed 27-03-2010 12:22:57 fixed INFO closing current database connection [pgsql] host = localhost port = 5432 base = frdb user = username pass = password [fixed] table = fr format = fixed filename = /var/www/fr.txt ...

    Read the article

  • Stack overflow while working with CFBuilder plugin

    - by lynxoid
    In the past 30 minutes of working in CFBuilder (I have it as an Eclipse Plug in), I got this error 4 times: A stack overflow has occurred. You are recommended to exit the workbench. Subsequent errors may happen and may terminate the workbench without warning. See the .log file for more details. Do you want to exit workbench?. together with: Unhandled event loop exception java.lang.StackOverflowError The log file had this: !ENTRY org.eclipse.ui 4 0 2010-05-11 09:41:51.951 !MESSAGE Unhandled event loop exception !STACK 0 java.lang.StackOverflowError at java.util.Arrays.mergeSort(Unknown Source) at java.util.Arrays.mergeSort(Unknown Source) at java.util.Arrays.mergeSort(Unknown Source) at java.util.Arrays.mergeSort(Unknown Source) at java.util.Arrays.mergeSort(Unknown Source) at java.util.Arrays.sort(Unknown Source) at com.adobe.ide.cfml.parser.generated.CFMLParserBase.getVariableInfo(CFMLParserBase.java:1613) at com.adobe.ide.cfml.parser.generated.CFMLParserBase.getVariableInfo(CFMLParserBase.java:1603) at com.adobe.ide.editor.model.CFMLDOMUtils.getVariable(CFMLDOMUtils.java:2375) at com.adobe.ide.editor.model.CFMLDOMUtils.getComponentNameFromNode(CFMLDOMUtils.java:2484) at com.adobe.ide.editor.model.CFMLDOMUtils.getComponentNameFromFunctionCall(CFMLDOMUtils.java:2168) at com.adobe.ide.editor.model.CFMLDOMUtils.getComponentNameFromNode(CFMLDOMUtils.java:2495) at com.adobe.ide.editor.model.CFMLDOMUtils.getComponentNameFromFunctionCall(CFMLDOMUtils.java:2168) at com.adobe.ide.editor.model.CFMLDOMUtils.getComponentNameFromNode(CFMLDOMUtils.java:2495) at com.adobe.ide.editor.model.CFMLDOMUtils.getComponentNameFromFunctionCall(CFMLDOMUtils.java:2168) (and so on - repeat n times) It happens whenever I copy/paste something. Does anyone know what is going on?
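
    This looks like unbounded recursion inside the CFML parser itself, so there may be nothing to fix locally, but a commonly suggested stopgap (value illustrative, not a cure) is to give Eclipse threads a larger stack in eclipse.ini so the recursive getComponentNameFromNode calls have more headroom. Add the -Xss line under the existing -vmargs section, or add -vmargs first if it is missing:

        -vmargs
        -Xss4m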

    Read the article

  • With NHibernate, how can I add a child object when updating a parent object?

    - by BMZ
    I have a simple Parent/Child relationship between a Person object and an Address object. The Person object exists in the DB. After doing a Get on the Person, I add a new Address object to the Address sub-object list of the parent, and do some other updates to the Person object. Finally, I do an Update on the Person object. With a SQL trace window, I can see the update to the Person object to the Person table and the Insert of the Address record to the Address table. The issue is that, after the update is performed, the AddressId (primary key on the Address object) is still set to 0, which is what it defaults to when you first initialize the Address object. I have verified that when I do an Add, this value is set correctly. Is this a known issue when trying to add sub-objects as part of an NHibernate UPDATE? Sample code and mapping files are below Thanks <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <class name="BusinessEntities.Wellness.Person,BusinessEntities.Wellness" table="Person" lazy="true" dynamic-insert="true" dynamic-update="false"> <id name="Personid" column="PersonID" type="int"> <generator class="native" /> </id> <version type="binary" generated="always" name="RecordVersion" column="`RecordVersion`"/> <property type="int" not-null="true" name="Customerid" column="`CustomerID`" /> <property type="AnsiString" not-null="true" length="9" name="Ssn" column="`SSN`" /> <property type="AnsiString" not-null="true" length="30" name="FirstName" column="`FirstName`" /> <property type="AnsiString" not-null="true" length="35" name="LastName" column="`LastName`" /> <property type="AnsiString" length="1" name="MiddleInitial" column="`MiddleInitial`" /> <property type="DateTime" name="DateOfBirth" column="`DateOfBirth`" /> <bag name="PersonAddresses" inverse="true" lazy="true" cascade="all"> <key column="PersonID" /> <one-to-many class="BusinessEntities.Wellness.PersonAddress,BusinessEntities.Wellness" / </bag> </class> </hibernate-mapping> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <class name="BusinessEntities.Wellness.PersonAddress,BusinessEntities.Wellness" table="PersonAddress" lazy="true" dynamic-insert="true" dynamic-update="false"> <id name="PersonAddressId" column="PersonAddressID" type="int"> <generator class="native" /> </id> <version type="binary" generated="always" name="RecordVersion" column="`RecordVersion`" /> <property type="AnsiString" not-null="true" length="1" name="AddressTypeid" column="`AddressTypeID`" /> <property type="AnsiString" not-null="true" length="60" name="AddressLine1" column="`AddressLine1`" /> <property type="AnsiString" length="60" name="AddressLine2" column="`AddressLine2`" /> <property type="AnsiString" length="60" name="City" column="`City`" /> <property type="AnsiString" length="2" name="UsStateId" column="`USStateID`" /> <property type="AnsiString" length="5" name="UsPostalCodeId" column="`USPostalCodeID`" /> <many-to-one name="Person" cascade="none" column="PersonID" /> </class> </hibernate-mapping> Person newPerson = new Person(); newPerson.PersonName = "John Doe"; newPerson.SSN = "111111111"; newPerson.CreatedBy = "RJC"; newPerson.CreatedDate = DateTime.Today; personDao.AddPerson(newPerson); Person updatePerson = personDao.GetPerson(newPerson.PersonId); updatePerson.PersonAddresses = new List<PersonAddress>(); PersonAddress addr = new PersonAddress(); addr.AddressLine1 = "1 Main St"; addr.City = "Boston"; addr.State = "MA"; addr.Zip = "12345"; updatePerson.PersonAddresses.Add(addr); personDao.UpdatePerson(updatePerson); int 
addressID = updatePerson.PersonAddresses[0].AddressId;
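
    A hedged sketch of the usual pattern for an inverse="true" collection (the session and transaction names are illustrative, not taken from the posted DAO): the child has to point back at the parent for the PersonID column to be written, and the native identifier is only assigned when the session flushes, so committing a transaction around the update is the simplest way to see PersonAddressId populated on the same instance afterwards.

        var addr = new PersonAddress
        {
            AddressTypeid  = "H",                // illustrative value
            AddressLine1   = "1 Main St",
            City           = "Boston",
            UsStateId      = "MA",
            UsPostalCodeId = "12345",
            Person         = updatePerson        // inverse="true": the child side owns the FK
        };
        updatePerson.PersonAddresses.Add(addr);

        using (var tx = session.BeginTransaction())
        {
            session.Update(updatePerson);        // cascade="all" saves the new address too
            tx.Commit();                         // flush happens here; addr.PersonAddressId is set now
        }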

    Read the article

  • How to send an image to a remote server using web services in Android, having only saved it to a byte array

    - by satyamurthy
    I need to get an image from the SD card and store it on a remote server. I am getting the image from the SD card and converting it to a byte array using a Bitmap, but when I inspect the byte array it shows different values and does not match the byte array produced by the .NET image conversion on the server. Can you please help if you have a solution? It is quite urgent. The following is the code I am using; please suggest a fix: FileInputStream fin = new FileInputStream(new File("/sdcard/pictures/1.jpg")); BufferedInputStream bis = new BufferedInputStream(fin,3000); byte[] data = new byte[bis.available()]; bis.read(data, 0, data.length); byte[] data1=new byte[data.length]; for (int i = 0; i < data.length; i++) { System.out.print(data[i]); data1[i]=data[i]; } System.out.println("5..................."+data1); Bitmap bitmap = BitmapFactory.decodeByteArray(data1,0,data1.length); System.out.println("6..................."+data1.length); Log.v("hgfjohfjghjdfhgj",""+bitmap); if(bitmap!=null) image.setImageBitmap(bitmap); else Log.e("Bitmap "," Not Created");
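
    A hedged sketch of a more deterministic way to read the file before sending it (not the poster's code; the path is the one from the question): InputStream.available() is not guaranteed to report the full file size, so reading exactly file.length() bytes with readFully() removes one source of mismatch, and Base64-encoding the result gives a string that is easy to compare with what the .NET side decodes via Convert.FromBase64String.

        import android.util.Base64;
        import java.io.DataInputStream;
        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;

        public final class ImageBytes {
            public static String readAsBase64(String path) throws IOException {
                File file = new File(path);               // e.g. "/sdcard/pictures/1.jpg"
                byte[] data = new byte[(int) file.length()];
                DataInputStream in = new DataInputStream(new FileInputStream(file));
                try {
                    in.readFully(data);                   // read the whole file, not just available()
                } finally {
                    in.close();
                }
                return Base64.encodeToString(data, Base64.NO_WRAP);
            }
        }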

    Read the article

  • Rails runner command not saving to cache

    - by mark
    Hi I'm having a bit of a problem with a cron task generated by rails whenever plugin that should store remote data in the rails cache for display. What I have is this: schedule.rb set :path, '/var/www/apps/tuexplore/current' every 1.hour do runner "Weather.cache_remote", :environment => :production end calls this model class Weather def self.cache_remote Rails.cache.write('weather_data', Net::HTTP.get_response(URI.parse(WEATHER_URL)).body) end end Calling whenever returns this PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/deploy/.gem/ruby/1.8/bin 0 * * * * /var/www/apps/tuexplore/current/script/runner -e production "Weather.cache_remote" This doesn't work. Calling the weather model method from a controller works fine, but I need to schedule it hourly. The cron task causes a "Cache write: weather_data" entry to appear in the production log but data isn't stored nor output into the page. Additional information, I can log into production console and run Weather.cache_remote, then read the data from the rails cache. I'd be really appreciative if someone could point out the error of my ways. If further explanation is needed please ask. Thanks in advance for any pointers.
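
    A hedged thing to check first: the default Rails cache store is an in-process memory store, so the hourly runner writes into its own process's memory and the web application never sees the entry, even though "Cache write" appears in the log. Pointing both at a shared store in config/environments/production.rb, for example the file system, is the usual fix (the path below is the conventional one, adjust as needed):

        # config/environments/production.rb
        # use a cache store shared between the web processes and script/runner
        config.cache_store = :file_store, "#{RAILS_ROOT}/tmp/cache"
        # a :mem_cache_store pointing at a memcached instance would work the same way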

    Read the article

  • Are there ways to write php/python code to run as hooks in the Apache Request Processing pipeline?

    - by SB
    Does anybody know of any modules that provide the functionality to write python or PHP code to run as hooks in the Apache request processing pipeline? For instance, mod_perl lets me write PerlModules, which can contain handlers for the header parsing phase, content delivery, and even filters. I would like to do something similar in other scripting languages. I could write it in C, but the goal is to deploy a module that would work across a number of systems. If I deliver it as binary in C, then it would require 64/32-bit versions and some other issues. With perl, I can just require certain modules installed and mod_perl2.
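
    For the Python side, mod_python exposes most of the request-processing phases as Python*Handler directives, in the same spirit as mod_perl. A minimal, hedged sketch follows; the module name, file locations and log message are invented for illustration.

        # myhooks.py (somewhere on the PythonPath)
        from mod_python import apache

        def headerparserhandler(req):
            # runs during the header-parsing phase
            req.log_error("saw request for %s" % req.uri)
            return apache.OK

        def handler(req):
            # content-generation phase
            req.content_type = "text/plain"
            req.write("hello from mod_python\n")
            return apache.OK

        # and the matching httpd.conf block:
        #   <Location /hooked>
        #       SetHandler mod_python
        #       PythonPath "sys.path + ['/var/www/hooks']"
        #       PythonHeaderParserHandler myhooks
        #       PythonHandler myhooks
        #   </Location>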

    Read the article

  • How to get a list of all Subversion commit author usernames?

    - by Quinn Taylor
    I'm looking for an efficient way to get the list of unique commit authors for an SVN repository as a whole, or for a given resource path. I haven't been able to find an SVN command specifically for this (and don't expect one) but I'm hoping there may be a better way that what I've tried so far in Terminal (on OS X): svn log --quiet | grep "^r" | awk '{print $3}' svn log --quiet --xml | grep author | sed -E "s:</?author>::g" Either of these will give me one author name per line, but they both require filtering out a fair amount of extra information. They also don't handle duplicates of the same author name, so for lots of commits by few authors, there's tons of redundancy flowing over the wire. More often than not I just want to see the unique author usernames. (It actually might be handy to infer the commit count for each author on occasion, but even in these cases it would be better if the aggregated data were sent across instead.) I'm generally working with client-only access, so svnadmin commands are less useful, but if necessary, I might be able to ask a special favor of the repository admin if strictly necessary or much more efficient. The repositories I'm working with have tens of thousands of commits and many active users, and I don't want to inconvenience anyone.
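
    A small refinement of the first pipeline, offered as a sketch (the repository URL is a placeholder, and it still pulls the full log over the wire): splitting on the | separator is a little more robust than a positional awk field, and sort -u collapses the duplicates client-side.

        svn log --quiet http://example.com/svn/repo/trunk \
          | awk -F'|' '/^r[0-9]+ /{gsub(/ /, "", $2); print $2}' \
          | sort -u

        # for a rough per-author commit count instead, replace the last stage with:
        #   | sort | uniq -c | sort -rn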

    Read the article

  • Debugging Post Request with Chrome Dev Tools

    - by benek
    I am trying to use Chrome Dev for debugging the following Angular post request : $http.post("http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader", flowHeader) After running the statement with right-click / evaluate, I can see the post in the network panel with a pending state. How can I get the result or "commit" the request and leave easily this "pending" state from the dev console ? I am not yet very familiar with JS callbacks, some code is expected. Thanks. EDIT I have tried to run from the console : $scope.$apply(function(){$http.post("http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader", flowHeader).success(function(data){console.log("error "+data)}).error(function(data){console.log("error "+data)})}) It returns : undefined EDIT The post I am trying to solve generate an HTTP 400. Here is the result : Request URL:http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader Request Method:POST Status Code:400 Mauvaise Requ?te Request Headersview source Accept:application/json, text/plain, / Accept-Encoding:gzip,deflate,sdch Accept-Language:fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Content-Length:5354 Content-Type:application/json;charset=UTF-8 Cookie:JSESSIONID=285AF523EA18C0D7F9D581CDB2286C56 Host:picjboss.puma.lan:8880 Origin:http://picjboss.puma.lan:8880 Referer:http://picjboss.puma.lan:8880/fluxpousse/ User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36 X-Requested-With:XMLHttpRequest Request Payloadview source {refHeader:IDSFP, idEntrepot:619, codeEntreprise:null, codeBanniere:null, codeArticle:7,…} cessionPrice: 78 codeArticle: "7" codeBanniere: null codeDateAppro: null codeDateDelivery: null codeDatePrepa: null codeEntreprise: null codeFournisseur: null codeUtilisateur: null codeUtilisateurLastUpdate: null createDate: null dateAppro: null dateDelivery: null datePrepa: null hasAssortControl: null hasCadenceForce: null idEntrepot: 619 isFreeCost: null labelArticle: "Mayonnaise de DIJON" labelFournisseur: null listDetail: [,…] pcbArticle: 12 pvc: 78 qte: 78 refCommande: "ref" refHeader: "IDSFP" state: "CREATED" stockArticle: 1200 updateDate: null Response Headersview source Connection:close Content-Length:996 Content-Type:text/html;charset=utf-8 Date:Fri, 08 Nov 2013 15:19:30 GMT Server:Apache-Coyote/1.1 X-Powered-By:Servlet 2.5; JBoss-5.0/JBossWeb-2.1

    Read the article

  • Castle, sharing a transient component between a decorator and a decorated component

    - by Marius
    Consider the following example: public interface ITask { void Execute(); } public class LoggingTaskRunner : ITask { private readonly ITask _taskToDecorate; private readonly MessageBuffer _messageBuffer; public LoggingTaskRunner(ITask taskToDecorate, MessageBuffer messageBuffer) { _taskToDecorate = taskToDecorate; _messageBuffer = messageBuffer; } public void Execute() { _taskToDecorate.Execute(); Log(_messageBuffer); } private void Log(MessageBuffer messageBuffer) {} } public class TaskRunner : ITask { public TaskRunner(MessageBuffer messageBuffer) { } public void Execute() { } } public class MessageBuffer { } public class Configuration { public void Configure() { IWindsorContainer container = null; container.Register( Component.For<MessageBuffer>() .LifeStyle.Transient); container.Register( Component.For<ITask>() .ImplementedBy<LoggingTaskRunner>() .ServiceOverrides(ServiceOverride.ForKey("taskToDecorate").Eq("task.to.decorate"))); container.Register( Component.For<ITask>() .ImplementedBy<TaskRunner>() .Named("task.to.decorate")); } } How can I make Windsor instantiate the "shared" transient component so that both "Decorator" and "Decorated" gets the same instance? Edit: since the design is being critiqued I am posting something closer to what is being done in the app. Maybe someone can suggest a better solution (if sharing the transient resource between a logger and the true task is considered a bad design)

    Read the article

  • Homebrew build with different arch?

    - by StasM
    I tried to install mysql-connector-c recipe via homebrew, and it builds just fine, but produces x86_64 library: $file ~/brew/lib/libmysql.dylib .../brew/lib/libmysql.dylib: Mach-O 64-bit dynamically linked shared library x86_64 I however need i386 library for my project. I tried to give it CFLAGS and LDFLAGS like this: CFLAGS="-arch i386 -arch x86_64" LDFLAGS="-arch i386 -arch x86_64" brew install mysql-connector-c but nothing changes - it still builds x86_64 only binary. Is there any way to make homebrew build either dual arch library or i386 library? I have kernel architecture set to x86_64, if it matters.

    Read the article

  • Why is log4j not behaving as expected?

    - by Kieveli
    I have a co-worker who is trying to get log4j to behave as follows: Log to Stdout By default, disable most output Show only messages from java.sql.PrepareStatement at level debug and up He's getting caught up in the 'level' vs 'priority'. Here is his config file: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE log4j:configuration SYSTEM "D:/Java/apache-log4j-1.2.15/src/main/resources/org/apache/log4j/xml/log4j.dtd" > <log4j:configuration> <!-- Appenders --> <appender name="stdout" class="org.apache.log4j.ConsoleAppender"> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%5p %d{ISO8601} [%t][%x] %c - %m%n" /> </layout> </appender> <!-- Loggers for ibatus and JDBC database --> <logger name="java.sql.PreparedStatement"> <level value="debug"/> </logger> <!-- The Root Logger --> <root> <level value="error"/> <appender-ref ref="stdout"/> </root> </log4j:configuration> The output from this shows no messages in the log output. How does he need to change his log4j.xml config file to make it behave as he's expecting?
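
    Two hedged things to check, with a sketch: a child logger inherits the root appender by default (additivity), so the original file should already route java.sql.PreparedStatement debug output to stdout; and that logger name only ever receives events if the library in use actually logs under it (iBATIS does when its logging goes through log4j; plain JDBC never does). Making the wiring explicit at least rules the configuration out:

        <logger name="java.sql.PreparedStatement" additivity="false">
            <level value="debug"/>
            <!-- explicit appender-ref: equivalent to inheriting root's, but easier to verify -->
            <appender-ref ref="stdout"/>
        </logger>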

    Read the article

  • Getting an unhandled error and a lost connection when a client tries to communicate with a chat server in Twisted

    - by user2433888
    from twisted.internet.protocol import Protocol,Factory from twisted.internet import reactor class ChatServer(Protocol): def connectionMade(self): print "A Client Has Connected" self.factory.clients.append(self) print"clients are ",self.factory.clients self.transport.write('Hello,Welcome to the telnet chat to sign in type aim:YOUR NAME HERE to send a messsage type msg:YOURMESSAGE '+'\n') def connectionLost(self,reason): self.factory.clients.remove(self) self.transport.write('Somebody was disconnected from the server') def dataReceived(self,data): #print "data is",data a = data.split(':') if len(a) > 1: command = a[0] content = a[1] msg="" if command =="iam": self.name + "has joined" elif command == "msg": ma=sg = self.name + ":" +content print msg for c in self.factory.clients: c.message(msg) def message(self,message): self.transport.write(message + '\n') factory = Factory() factory.protocol = ChatServer factory.clients = [] reactor.listenTCP(80,factory) print "Iphone Chat server started" reactor.run() The above code is running succesfully...but when i connect the client (by typing telnet localhost 80) to this chatserver and try to write message ,connection gets lost and following errors occurs : Iphone Chat server started A Client Has Connected clients are [<__main__.ChatServer instance at 0x024AC0A8>] Unhandled Error Traceback (most recent call last): File "C:\Python27\lib\site-packages\twisted\python\log.py", line 84, in callWithLogger return callWithContext({"system": lp}, func, *args, **kw) File "C:\Python27\lib\site-packages\twisted\python\log.py", line 69, in callWithContext return context.call({ILogContext: newCtx}, func, *args, **kw) File "C:\Python27\lib\site-packages\twisted\python\context.py", line 118, in callWithContext return self.currentContext().callWithContext(ctx, func, *args, **kw) File "C:\Python27\lib\site-packages\twisted\python\context.py", line 81, in callWithContext return func(*args,**kw) --- --- File "C:\Python27\lib\site-packages\twisted\internet\selectreactor.py", line 150, in _doReadOrWrite why = getattr(selectable, method)() File "C:\Python27\lib\site-packages\twisted\internet\tcp.py", line 199, in doRead rval = self.protocol.dataReceived(data) File "D:\chatserverultimate.py", line 21, in dataReceived content = a[1] exceptions.IndexError: list index out of range Where am I going wrong?
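
    A hedged sketch of the two most likely culprits, keeping the original class shape: the "iam" branch never assigns self.name (it just builds a string and throws it away), and the message branch assigns to the wrong name (ma=sg). Guarding the split also covers telnet lines that contain no colon, which is what the IndexError in the traceback points at.

        def dataReceived(self, data):
            parts = data.strip().split(':', 1)       # telnet sends a trailing \r\n, hence strip()
            if len(parts) < 2:
                return                               # ignore lines without a command:content pair
            command, content = parts
            if command == "iam":
                self.name = content                  # remember the name on this connection
                msg = self.name + " has joined"
            elif command == "msg":
                msg = self.name + ": " + content
            else:
                return
            for c in self.factory.clients:
                c.message(msg)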

    Read the article

  • Unit testing, mocking - simple case: Service - Repository

    - by rafek
    Consider a following chunk of service: public class ProductService : IProductService { private IProductRepository _productRepository; // Some initlization stuff public Product GetProduct(int id) { try { return _productRepository.GetProduct(id); } catch (Exception e) { // log, wrap then throw } } } Let's consider a simple unit test: [Test] public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() { var product = EntityGenerator.Product(); _productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product); Product returnedProduct = _productService.GetProduct(product.Id); Assert.AreEqual(product, returnedProduct); _productRepositoryMock.VerifyAll(); } At first it seems that this test is ok. But let's change our service method a little bit: public Product GetProduct(int id) { try { var product = _productRepository.GetProduct(id); product.Owner = "totallyDifferentOwner"; return product; } catch (Exception e) { // log, wrap then throw } } How to rewrite a given test that it'd pass with the first service method and fail with a second one? How do you handle this kind of simple scenarios? HINT: A given test is bad coz product and returnedProduct is actually the same reference.
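
    One hedged way to make the test catch the mutation, in the same Moq style as the snippet above: take a snapshot of the values you expect before calling the service, and assert on properties rather than on the reference the mock handed back.

        [Test]
        public void GetProduct_returns_the_repository_product_unmodified()
        {
            var product = EntityGenerator.Product();
            var expectedOwner = product.Owner;                 // snapshot taken before the call
            _productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);

            Product returned = _productService.GetProduct(product.Id);

            Assert.AreEqual(product.Id, returned.Id);
            Assert.AreEqual(expectedOwner, returned.Owner);    // fails once the service rewrites Owner
            _productRepositoryMock.VerifyAll();
        }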

    Read the article

  • Mac firewall blocking nginx (port 80) from external side

    - by Alex Ionescu
    I installed nginx using ports and started it with sudo. Accessing the nginx welcome page from localhost works perfectly, however accessing it from an external computer fails. Doing an nmap on the computer from the outside reveals 80/tcp filtered http So clearly the mac firewall is blocking the port. I then proceed to add the nginx executable to the firewall exception list as seen in this image, however the nmap still shows up as port 80 being filtered and I'm unable to access the webpage. The exact binary that is in the list is /opt/local/sbin/nginx which to my knowledge seems correct Any ideas what I should do? Thanks! P.S. Turning the firewall off does allow me to access the website from the outside world, however that isn't an ideal solution.

    Read the article
