Search Results

Search found 5492 results on 220 pages for 'git fetch'.

Page 172/220 | < Previous Page | 168 169 170 171 172 173 174 175 176 177 178 179  | Next Page >

  • Parse JSON: I have a server response

    - by GauravBOSS
    I am new to JSON parsing. I have a server response; how can I fetch the "DeviceName" and "Id" at index 0? Thanks in advance.

      { Successfully = (
          { 0 = {
              DeviceName = Tommy;
              DeviceTypeId = 1;
              EMEI = xxxxxx;
              GId = xxxxx;
              Id = 105;
              Pet = "<null>";
              PetImage = "352022008228784.jpg";
              ProtocolId = xxxx;
              SimNo = xxxxx;
          }; }
      ); }

    Read the article

  • WordPress + Facebook comments addition (PHP knowledge needed)

    - by user1356223
    I succeeded in fetching the Facebook comment count via this function:

      <?php
      function fb_comment_count() {
          global $post;
          $url = get_permalink($post->ID);
          $filecontent = file_get_contents('https://graph.facebook.com/?ids=' . $url);
          $json = json_decode($filecontent);
          $count = $json->$url->comments;
          if ($count == 0 || !isset($count)) {
              $count = 0;
          }
          echo $count;
      }
      ?>

    And I call it with:

      <?php fb_comment_count(); ?>

    Now how do I add it to this code:

      <?php comments_number(__('No Comments'), __('1 Comment'), __('% Comments'), '', __('Comments Closed')); ?>

    so that WordPress shows the number of WP and FB comments together as one number? Thank you very much to everyone!
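
    Below is a minimal, untested sketch of one way to combine the two counts. It assumes the Graph API helper above is rewritten to return the number instead of echoing it; fb_comment_count_value() is a hypothetical name, and the rest mirrors the snippets in the question plus WordPress's get_comments_number().

      <?php
      // Hypothetical helper: same Graph API lookup as fb_comment_count() above,
      // but it returns the count instead of echoing it.
      function fb_comment_count_value() {
          global $post;
          $url  = get_permalink($post->ID);
          $json = json_decode(file_get_contents('https://graph.facebook.com/?ids=' . $url));
          return isset($json->$url->comments) ? (int) $json->$url->comments : 0;
      }

      // In the template, print the combined WP + FB total instead of calling comments_number():
      echo (int) get_comments_number() + fb_comment_count_value();
      ?>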

    Read the article

  • PHP: code comments inside functions prevent it from working

    - by Karem
      $query = $connect->prepare("SELECT firstname, lastname FROM users WHERE id = '$id'");
      $query->execute();
      $row = $query->fetch();
      // $full_name = $row["firstname"] . " " . $row["lastname"];
      $full_name = $row["firstname"] . " " . substr($row["lastname"], 0, 1) . ".";
      return $full_name;

    If I remove the line that is a comment (//), it returns $full_name; if it is there, it won't work. I also tried commenting with #, but it still won't work (won't return anything) as soon as there is a code comment. Weird issue.
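
    As an aside, here is an untested sketch of the same lookup using a bound placeholder instead of interpolating $id into the SQL string (table and column names are taken from the snippet above; everything else is an assumption):

      // Same query, but with a PDO placeholder so $id is not embedded in the SQL text.
      $query = $connect->prepare("SELECT firstname, lastname FROM users WHERE id = ?");
      $query->execute(array($id));
      $row = $query->fetch();

      // Comment kept on its own line, above the statement it describes.
      // $full_name = $row["firstname"] . " " . $row["lastname"];
      $full_name = $row["firstname"] . " " . substr($row["lastname"], 0, 1) . ".";
      return $full_name;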

    Read the article

  • GQL, Aggregation and Order By

    - by Koran
    Hi, how can GQL support ORDER BY when it does not support aggregation? The question is: if, say, the result of the query is more than 1000 rows, does ORDER BY return the fully ordered list, or only the first 1000 items, which are then ordered? To explain the question more: is MIN() conceptually the same as query.orderby('asc').fetch(1)? If it is properly ordering the list, then how can it not provide COUNT(), since to properly order the list GQL presumably has to walk the whole list, in which case COUNT() is not an issue at all? Or is each item indexed and kept in some kind of tree, so that it does not need to scan everything every time?

    Read the article

  • Wrong file encoding after Dist::Zilla

    - by xenoterracide
    How can I get the mojibake test to pass? This might be a bug in the contributors plugin. The character does not render correctly in perldoc, but does in my vim and in the extracted git log.

      # Failed test 'Mojibake test for blib/lib/Pod/Spell.pm'
      # at /home/xenoterracide/perl5/perlbrew/perls/perl-5.18.1/lib/site_perl/5.18.1/Test/Mojibake.pm line 168.
      # Non-UTF-8 unexpected in blib/lib/Pod/Spell.pm, line 431 (POD)

    Here's a snippet from the source, which should probably be looked at directly, since copy-paste may not preserve the encoding issue:

      =item * Olivier Mengué <[email protected]>
      =back

    A little more vim exploration shows that :set fileencoding is being changed to latin1. Editing the file in vim seems to fix this, but since the file is being generated, how can I get it generated with the correct encoding?

    Read the article

  • Fastest way to deploy rails apps with Passenger

    - by yuval
    I am working on a DreamHost server with Rails 2.3.5. Every time I make changes to a site, I have to ssh into the server, remove all the files, upload a zip file containing all the new files for the site, unzip that file, migrate the database, and go. Something tells me there's a faster way to deploy Rails apps. I am using Mac Time Machine to keep track of different versions of my applications. I know git tracks files, but I don't really know how to work with it to deploy my applications, since Passenger takes care of all the magic for me. What would be a faster way to deploy my applications (and avoid the downtime associated with my method when I delete all the files on the server)? Thanks!

    Read the article

  • Manipulating source packages from Hackage: how to easily deploy to several Windows boxes?

    - by Jonke
    Recently, when I have found good source packages for GHC 6.12/6.10 on Hackage, I've been forced to make some minor or major changes to the cabal files to get those packages to work under Windows. Besides forking and merging my fixes on GitHub, what seems to be the best (or good enough) practice for taking these modified builds to a couple of other Windows boxes that only have a basic Haskell Platform installed? I would prefer to work with cabal-install, because that is what one normally uses. Should one put the modified build dirs on a shared/networked dir and mount it from the targeted Windows box? Say something like this:

    On the "prepare" machine:

      cabal fetch foo
      cabal unpack foo
      cd foo
      (edit the .cabal and .hs files)
      cabal configure
      cabal build

    On the "useanddevelopnormal" machine:

      cd machinepreparemount
      cd foo
      cabal install

    Read the article

  • Source control Branching needs

    - by Mükremin
    Hello, we are creating hospital information system software. The project will differ from hospital to hospital and contain different use cases, but lots of parts will be the same, so we will use the branching mechanism of the source control system. If we find a bug in one hospital's branch, how can we know whether the other branches have the same bug or not? (The numbers in the attached picture show each hospital's software.) Do you have a solution to this problem? Which source control system (SVN, Git, Hg) would be suitable for this problem? Thank you!

    Read the article

  • Using "CASE" in Where clause to choose various column harm the performance

    - by zivgabo
    I have a query which needs to be dynamic on some of the columns, meaning I get a parameter and according to its value I decide which column to use in my WHERE clause. I've implemented this request using a CASE expression:

      (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) >= DATEADD(mi, -@TZOffsetInMins, @sTime)
      AND (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) < DATEADD(mi, -@TZOffsetInMins, @fTime)

    If @isArrivalTime = 1 then choose the ArrivalTime column, else choose the PickedupTime column. I have a clustered index on ArrivalTime and a nonclustered index on PickedupTime. I've noticed that when I'm using this query (with @isArrivalTime = 1), performance is a lot worse compared to using only ArrivalTime. Maybe the query optimizer can't use or choose the index properly this way? I compared the execution plans and noticed that when I'm using the CASE, 32% of the time is spent on the index scan, but when I didn't use the CASE (just used ArrivalTime) only 3% was spent on this index scan. Does anyone know the reason for this?

    Read the article

  • How can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?

    - by Gweebz
    I am working on a legacy code base with an existing DB schema. The existing code uses SQL and PL/SQL to execute queries on the DB. We have been tasked with making a small part of the project database-engine agnostic (at first; eventually everything). We have chosen to use Hibernate 3.3.2.GA and "*.hbm.xml" mapping files (as opposed to annotations). Unfortunately, it is not feasible to change the existing schema because we cannot regress any legacy features. The problem I am encountering is when I try to map a uni-directional, one-to-many relationship where the FK is also part of a composite PK. Here are the classes and mapping file:

    CompanyEntity.java

      public class CompanyEntity {
          private Integer id;
          private Set<CompanyNameEntity> names;
          ...
      }

    CompanyNameEntity.java

      public class CompanyNameEntity implements Serializable {
          private Integer id;
          private String languageId;
          private String name;
          ...
      }

    CompanyNameEntity.hbm.xml

      <?xml version="1.0"?>
      <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd">
      <hibernate-mapping package="com.example">
          <class name="com.example.CompanyEntity" table="COMPANY">
              <id name="id" column="COMPANY_ID"/>
              <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false">
                  <key column="COMPANY_ID"/>
                  <one-to-many entity-name="vendorName"/>
              </set>
          </class>
          <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME">
              <composite-id>
                  <key-property name="id" column="COMPANY_ID"/>
                  <key-property name="languageId" column="LANGUAGE_ID"/>
              </composite-id>
              <property name="name" column="NAME" length="255"/>
          </class>
      </hibernate-mapping>

    This code works just fine for SELECT and INSERT of a Company with names. I encountered a problem when I tried to update an existing record: I received a BatchUpdateException, and after looking through the SQL logs I saw Hibernate was trying to do something stupid:

      update COMPANY_NAME set COMPANY_ID=null where COMPANY_ID=?

    Hibernate was trying to dis-associate the child records before updating them. The problem is that this field is part of the PK and not nullable. I found that the quick way to make Hibernate not do this is to add not-null="true" to the <key> element in the parent mapping, so now the <set> in my mapping looks like this (the rest of CompanyNameEntity.hbm.xml is unchanged):

      <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false">
          <key column="COMPANY_ID" not-null="true"/>
          <one-to-many entity-name="vendorName"/>
      </set>

    This mapping gives the exception:

      org.hibernate.MappingException: Repeated column in mapping for entity: companyName column: COMPANY_ID (should be mapped with insert="false" update="false")

    My problem now is that I have tried to add these attributes to the key-property element, but that is not supported by the DTD. I have also tried changing it to a key-many-to-one element, but that didn't work either. So... how can I map insert="false" update="false" on a composite-id key-property which is also used in a one-to-many FK?

    Read the article

  • Doctrine inserts when it should update

    - by Goran Juric
    I am trying to do the most simple update query, but Doctrine issues an INSERT statement instead of an UPDATE.

      $q = Doctrine_Query::create()
          ->from('Image i')
          ->where('id = ?');
      $image = $q->fetchOne($articleId, Doctrine_Core::HYDRATE_RECORD);
      $image->copyright = "some text";
      $image->save();

    I have also tried using the example from the manual, but still a new record gets inserted:

      $userTable = Doctrine_Core::getTable('User');
      $user = $userTable->find(2);
      if ($user !== false) {
          $user->username = 'Jack Daniels';
          $user->save();
      }

    Edit: this example from the manual works:

      $user = new User();
      $user->assignIdentifier(1);
      $user->username = 'jwage';
      $user->save();

    The funny thing is that I use this on another model and there it works OK. Maybe I have to fetch the whole array graph for this to work (I have another model in a one-to-many relationship)?

    Read the article

  • Decoding and caching json every 60 minutes

    - by Gary
    Hi, how do I do this on a PHP web page? I want to get and decode a JSON string and display the results as HTML on my page; however, I don't want the page hotlinking back to the source. Ideally I could write the decoded string to a text file, say weather.txt, on the server, keep the HTML formatting, and make it so that the page won't fetch the JSON feed until 60 minutes have passed since the last time it was fetched, regardless of how many times the page is opened during that 60-minute period; in the meantime weather.txt is what gets viewed. All I can come up with is a simple script that hotlinks; everything else I have tried simply failed.

      $file = file_get_contents('http://sample.com/weather');
      $out = json_decode($file);
      echo $out->mainText;

    I will appreciate any help with this.
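
    Below is a minimal, untested sketch of one common approach: cache the decoded output in weather.txt and only re-fetch when the file is older than 60 minutes. The feed URL, the weather.txt name and the mainText property come from the question; the surrounding markup and everything else are assumptions.

      <?php
      $cacheFile = __DIR__ . '/weather.txt';
      $maxAge    = 60 * 60; // 60 minutes, in seconds

      // Re-fetch only if the cache file is missing or older than 60 minutes.
      if (!file_exists($cacheFile) || (time() - filemtime($cacheFile)) > $maxAge) {
          $json = file_get_contents('http://sample.com/weather');
          $out  = json_decode($json);
          if ($out !== null) {
              // Store the already-formatted HTML so later page views never hit the source.
              file_put_contents($cacheFile, '<p>' . $out->mainText . '</p>');
          }
      }

      // Every page view reads the local copy.
      echo file_get_contents($cacheFile);
      ?>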

    Read the article

  • SQL Query: Using Cursors

    - by user2953138
    I need some directions for SQL Server and cursors. I have a table named Order:

      OrderID  Item  Amount
      1        A     10
      1        B     1
      2        A     5
      2        C     4
      2        D     21
      3        B     11

    I have a second table named Storage:

      Item  Amount
      A     40
      B     44
      C     20
      D     1

    For every OrderID, I want to check whether enough items are available. If not, I want to return an error message. Can this be done with cursors at all? Are nested cursors the solution to this? My main issue is understanding how I can fetch each OrderID as an actual "group" (ID = 1, 2, 3, etc.) instead of line by line.

    Read the article

  • How to build unlimited levels of menu with PHP and MySQL

    - by Starx
    Well, to build my menu I use a DB structure similar to this. To assign another submenu to an existing submenu, I simply assign its parent's id as the value of its parent field; parent = 0 means top menu. There is no problem creating a submenu inside another submenu. This is the way I fetch the submenus for the top menu:

      <ul class="topmenu">
      <? $list = $obj->childmenu($parentid); // this list contains the array of submenus under $parentid
         foreach ($list as $menu) {
             extract($menu);
             echo '<li><a href="#">' . $name . '</a></li>';
         } ?>
      </ul>

    What I want to do is check whether a menu item has child menus, keep checking until every available child menu has been found, and display each child menu inside its particular list item, like this: Home ........
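
    Below is a minimal, untested sketch of the usual recursive approach. It assumes $obj->childmenu($parentid) returns rows with 'id' and 'name' keys, as the snippet above suggests; everything else is an assumption.

      <?php
      // Recursively render one menu level and, inside each <li>, the level below it.
      function render_menu($obj, $parentId = 0) {
          $list = $obj->childmenu($parentId);          // submenus whose parent = $parentId
          if (empty($list)) {
              return '';                               // no children: stop recursing
          }
          $class = ($parentId == 0) ? ' class="topmenu"' : '';
          $html  = '<ul' . $class . '>';
          foreach ($list as $menu) {
              $html .= '<li><a href="#">' . $menu['name'] . '</a>';
              $html .= render_menu($obj, $menu['id']); // nested submenu, to any depth
              $html .= '</li>';
          }
          return $html . '</ul>';
      }

      echo render_menu($obj);
      ?>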

    Read the article

  • How do I return the rows from an Oracle Stored Procedure using SELECT?

    - by Calanus
    I have a stored procedure which returns a ref cursor, as follows:

      CREATE OR REPLACE PROCEDURE AIRS.GET_LAB_REPORT (ReportCurTyp OUT sys_refcursor)
      AS
          v_report_cursor sys_refcursor;
          report_record v_lab_report%ROWTYPE;
          l_sql VARCHAR2 (2000);
      BEGIN
          l_sql := 'SELECT * FROM V_LAB_REPORT';
          OPEN v_report_cursor FOR l_sql;
          LOOP
              FETCH v_report_cursor INTO report_record;
              EXIT WHEN v_report_cursor%NOTFOUND;
          END LOOP;
          CLOSE v_report_cursor;
      END;

    I want to use the output from this stored procedure in another SELECT statement, like:

      SELECT * FROM GET_LAB_REPORT()

    but I can't seem to get my head around the syntax. Any ideas?

    Read the article

  • WPF webbrowser - get HTML downloaded?

    - by Mathias Lykkegaard Lorenzen
    I'm listening to the WPF WebBrowser's LoadCompleted event. It has some navigation arguments which provide details about the navigation. However, e.Content is always null. Am I paying attention to the wrong event here? How can I fetch the HTML that was just downloaded, as a string? I tried some things which I would consider hacks, but they return a string of HTML even when that was not the string downloaded. For instance, with that method, when I go to a page which just sends me the string abc, I get the result <document><body>abc</body></document> or something similar. I would prefer not getting into any more hacks than necessary to get this running.

    Read the article

  • @OneToOne and @JoinColumn, auto delete null entity , doable?

    - by smallufo
    I have two entities, with the following JPA annotations:

      @Entity
      @Table(name = "Owner")
      public class Owner implements Serializable {
          @Id
          @GeneratedValue(strategy = GenerationType.AUTO)
          @Column(name = "id")
          private long id;

          @OneToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
          @JoinColumn(name = "Data_id")
          private Data Data;
      }

      @Entity
      @Table(name = "Data")
      public class Data implements Serializable {
          @Id
          private long id;
      }

    Owner and Data have a one-to-one mapping; the owning side is Owner. The problem occurs when I execute:

      owner.setData(null);
      ownerDao.update(owner);

    The Owner table's Data_id becomes null; that's correct. But the Data row is not deleted automatically. I have to write another DataDao and another service layer to wrap the two actions (ownerDao.update(owner); dataDao.delete(data);). Is it possible to make the Data row automatically deleted when the owning Owner sets it to null?

    Read the article

  • Why is there an extra "Using where" in the execution plan of this query?

    - by user366534
    I see the plan of this query:

      EXPLAIN SELECT * FROM `subscribers` WHERE state = 4 AND date_added < '2010-12-23 11:47:45'

    It shows:

      id  select_type  table        type   possible_keys     key               key_len  ref   rows  Extra
      1   SIMPLE       subscribers  range  state_date_added  state_date_added  9        NULL  8     Using where

    Here are the indexes of the table:

      Table        Non_unique  Key_name          Seq_in_index  Column_name    Collation  Cardinality  Sub_part  Packed  Null  Index_type
      subscribers  0           PRIMARY           1             subscriber_id  A          382039       NULL      NULL          BTREE
      subscribers  0           email_list_id     1             email_address  A          191019       NULL      NULL          BTREE
      subscribers  0           email_list_id     2             list_id        A          382039       NULL      NULL          BTREE
      subscribers  1           FK_list_id        1             list_id        A          10           NULL      NULL          BTREE
      subscribers  1           state_date_added  1             state          A          12           NULL      NULL          BTREE
      subscribers  1           state_date_added  2             date_added     A          8128         NULL      NULL          BTREE

    The last two lines describe the index that is supposed to serve the query. Why is there "Using where" in the Extra column? Even if I fetch only the state and date_added columns, the Extra column shows "Using where; Using index". I understand why it has "Using index", but I don't understand the "Using where" here.

    Read the article

  • Renaming files: Visual Studio vs Version control

    - by Benjol
    The problem with renaming files is that if you want to take advantage of Visual Studio refactoring, you really need to do it from inside Visual Studio. But most (not all*) version control system also want to be the ones doing the renaming. One solution is to use integrated source control, but this is not always available, and in some cases is pretty clunky. I'd personally be more comfortable using source control separately, outside of Visual Studio, but I'm not sure how to manage this question of file renames. So, for those of you that use Visual Studio, which source control do you use? Do you use a VS integration (which one?) and otherwise, how do you resolve this renaming problem? (* git is smart enough to work it out for itself)

    Read the article

  • Upload Photo To Album

    - by st4ck0v3rfl0w
    Hello all, I'm trying to familiarize myself with Facebook's new Graph API, and so far I can fetch and write some data pretty easily. Something I'm struggling to find decent documentation on is uploading images to an album. According to http://developers.facebook.com/docs/api#publishing you need to supply the message argument, but I'm not quite sure how to construct it. Older resources I've read are:

      http://wiki.auzigog.com/Facebook_Photo_Uploads
      http://wiki.developers.facebook.com/index.php/Photos.upload

    If someone has more information or could help me tackle uploading photos to an album using the Facebook Graph API, please reply!

    Read the article

  • How to eval javascript code with iPhone SDK?

    - by overboming
    I need to fetch some results from a webpage which uses some JavaScript code to generate the part I am interested in, like the following:

      eval(function(p,a,c,k,e,d){e=function(c){return c};if(!''.replace(/^/,String)){while(c--)d[c]=k[c]||c;k=[function(e){return d[e]}];e=function(){return'\\w+'};c=1;};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p;}('5 11=17;5 12=["/3/2/1/0/13.4","/3/2/1/0/15.4","/3/2/1/0/14.4","/3/2/1/0/7.4","/3/2/1/0/6.4","/3/2/1/0/8.4","/3/2/1/0/10.4","/3/2/1/0/9.4","/3/2/1/0/23.4","/3/2/1/0/22.4","/3/2/1/0/24.4","/3/2/1/0/26.4","/3/2/1/0/25.4","/3/2/1/0/18.4","/3/2/1/0/16.4","/3/2/1/0/19.4","/3/2/1/0/21.4"];5 20=0;',10,27,'40769|54|Images|Files|png|var|imanhua_005_140430179|imanhua_004_140430179|imanhua_006_140430226|imanhua_008_140430242|imanhua_007_140430226|len|pic|imanhua_001_140429664|imanhua_003_140430117|imanhua_002_140430070|imanhua_015_140430414||imanhua_014_140430382|imanhua_016_140430414|sid|imanhua_017_140430429|imanhua_010_140430289|imanhua_009_140430242|imanhua_011_140430367|imanhua_013_140430382|imanhua_012_140430367'.split('|'),0,{}))

    How do I get the evaluation output?

    Read the article

  • GQL: I'm storing JSON in the DataStore. All JSON is getting converted to HTML entities; how to avoid this?

    - by fmsf
    The title says most of it: I'm storing JSON in the DataStore, and all the JSON is getting converted to HTML entities; how can I avoid this? Originally I had:

      myJson = db.StringProperty()

    It complained that the JSON I had was too long and that StringProperty had a limit of around 500 chars, suggesting to use TextProperty instead. It inserted without problems, but now myJson looks like this when I fetch it from the database:

      { &quot;timeUnit&quot;: &quot;14&quot;, &quot;taskCounter&quot;: &quot;0&quot;, &quot;dependencyCounter&quot;: &quot;0&quot;, &quot;tasks&quot;: [], &quot;dependencies&quot;: []}

    Any suggestions?

    Read the article

  • Breaking the SQL Compact 8K Limit?

    - by David Veeneman
    I am creating a desktop application that stores rich text documents in a SQL Compact database. Documents are converted to a byte array and stored as a binary column, and I am running into SQL Compact's 8K limit on binary field length. Is there a simple way to get around the 8K limit? I can come up with lots of complicated ways to do it, such as parsing into 8K chunks for storage and reassembling on fetch. But before I get into something that complex, I would like to make sure I can't solve the problem more simply, such as by changing the data type. If there is no simple way of getting around the 8K limit, is there a best practice for storing documents greater than 8K? Thanks for your help.

    Read the article

  • handling activity destruction in multithreaded android app

    - by Jayesh
    Hi, I have a multithreaded app where background threads are used to load data over the network or from disk/DB. Every once in a while the user will perform some action, e.g. fetch news over the network, which will spawn a background AsyncTask, but for some reason the user will quit the app (press the back button so that the activity gets destroyed). In most such scenarios I make appropriate checks in the background thread after it returns from network I/O, so that it won't crash by accessing members of the activity that has been destroyed by now. However, some corner cases are left where crashes happen, because the background thread accesses some member of the activity that is now null. Do other Android developers have some generic/recommended framework for handling such scenarios? These are the times when I wish Android would guarantee termination of all threads when an activity is destroyed (in the same way that a regular Linux process cleans up when it quits)... but I guess the Android devs had good reasons for not exposing process lifetimes through the API.

    Read the article

  • Is there a pattern for iterating over lists held by a class (dynamically typed OO languages)?

    - by Roman A. Taycher
    If I have a class that holds one or several lists, is it better to allow other classes to fetch those lists (with a getter), or to implement a doXList/eachXList type method for each list that takes a function and calls that function on each element of the list contained by that object? I wrote a program that did a ton of this, and I hated passing around all these lists, sometimes with a method in class A calling a method in class B to return lists contained in class C (B contains one or more Cs). Note that the question is about dynamically typed OO languages like Ruby or Smalltalk. An example that came up in my program: a Person class containing scheduling preferences, and a Scheduler class needing to access them.

    Read the article
