Search Results

Search found 71354 results on 2855 pages for 'data definition language'.

Page 1746/2855 | < Previous Page | 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753  | Next Page >

  • oracle plsql select pivot without dynamic sql to group by

    - by kayhan yüksel
    To whom it may concern: we would like to use SELECT with the PIVOT option on an 11g R2 Oracle DBMS. Our query is like:

        SELECT *
          FROM (SELECT o.ship_to_customer_no, ol.item_no, ol.amount
                  FROM t_order o, t_order_line ol
                 WHERE o.no = ol.order_no
                   AND ol.item_no IN (SELECT DISTINCT item_no FROM t_order_line))
         PIVOT -- xml
         (SUM(amount) FOR item_no IN (SELECT DISTINCT item_no AS item_no_ FROM t_order_line));

    As can be seen, XML is commented out. If run as PIVOT XML it gives the correct output in XML format, but we are required to get the data as plain (non-XML) pivot data, and in that form the statement throws ORA-00936: missing expression. (A subquery in the PIVOT IN clause is only allowed with PIVOT XML; the plain form requires a literal list of values.) Any resolutions or ideas would be welcomed. Best regards.

    If we can get the result of this into a sys_refcursor using dynamic SQL, it will be solved. The procedure so far (with the IN list built dynamically and the cursor opened over the generated statement):

        PROCEDURE pr_test2 (deneme OUT sys_refcursor)
        IS
           v_sql           NVARCHAR2 (4000) := '';
           v_pivot_items   NVARCHAR2 (4000) := '';
        BEGIN
           FOR i IN (SELECT DISTINCT item_no AS items FROM t_order_line)
           LOOP
              v_pivot_items := ',''' || i.items || '''' || v_pivot_items;
           END LOOP;

           v_pivot_items := LTRIM (v_pivot_items, ',');

           v_sql := 'SELECT * FROM (SELECT o.ship_to_customer_no, ol.item_no, ol.amount'
                 || ' FROM t_order o, t_order_line ol'
                 || ' WHERE o.no = ol.order_no'
                 || ' AND ol.item_no IN (SELECT DISTINCT item_no FROM t_order_line))'
                 || ' PIVOT (SUM(amount) FOR item_no IN (' || v_pivot_items || '))';

           -- open the cursor over the generated statement (the original wrapped
           -- the text in BEGIN...END and selected the SQL string itself from dual)
           OPEN deneme FOR v_sql;
        END;

    Kayhan YÜKSEL

    Read the article

  • multi-core processing in R on windows XP - via doMC and foreach

    - by Jan
    Hi guys, I'm posting this question to ask for advice on how to make use of multiple processors from R on a Windows XP machine. At the moment I create 4 scripts (each covering part of the loop, e.g. for (i in 1:100), for (i in 101:200), etc.) and run them in 4 different R sessions at the same time. This seems to use all the available CPU, but I would like to do it a bit more efficiently. One solution could be the "doMC" and "foreach" packages, but doMC is not available for R on a Windows machine:

        library("foreach")
        library("strucchange")
        library("doMC")   # would this be possible on a Windows machine?
        registerDoMC(2)   # for a computer with two cores (processors)

        ## Nile data with one breakpoint: the annual flows drop in 1898
        ## because the first Ashwan dam was built
        data("Nile")
        plot(Nile)

        ## F statistics indicate one breakpoint
        fs.nile <- Fstats(Nile ~ 1)
        plot(fs.nile)
        breakpoints(fs.nile)   # , hpc = "foreach" --> it would be great to test this
        lines(breakpoints(fs.nile))

    Any solutions or advice? Thanks, Jan

    Read the article

  • How to open the download window when a dynamically created link is clicked in asp.net

    - by Ranjana
    I have stored a text file in the database. I need to show the text file when I click the link, and this link has to be created dynamically. My code below (aspx.cs):

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!Page.IsPostBack)
            {
                DataTable dtassignment = new DataTable();
                dtassignment = serviceobj.DisplayAssignment(Session["staffname"].ToString());
                if (dtassignment != null)
                {
                    Byte[] bytes = (Byte[])dtassignment.Rows[0]["Data"];
                    //download(dtassignment);
                }
                divlink.InnerHtml = "";
                divlink.Visible = true;
                foreach (DataRow r in dtassignment.Rows)
                {
                    divlink.InnerHtml += "<a href='" + "'onclick='download(dtassignment)'>"
                                       + r["Filename"].ToString() + "</a>" + "<br/>";
                }
            }
        }

        public void download(DataTable dtassignment)
        {
            System.Diagnostics.Debugger.Break();
            Byte[] bytes = (Byte[])dtassignment.Rows[0]["Data"];
            Response.Buffer = true;
            Response.Charset = "";
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.ContentType = dtassignment.Rows[0]["ContentType"].ToString();
            Response.AddHeader("content-disposition",
                "attachment;filename=" + dtassignment.Rows[0]["FileName"].ToString());
            Response.BinaryWrite(bytes);
            Response.Flush();
            Response.End();
        }

    The link is created dynamically, but clicking it does not download the text file (the onclick attribute is emitted as literal text in the page, so the server-side download() method is never invoked). How do I carry this out? Please help me out.

    Read the article

  • MySQL Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The database is currently 5 GB, and most of that goes to one particular column, which is a text column holding the text version of PDF files. We expect a 20-30 GB (maybe 50 GB) database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup:

    1) We are using InnoDB on every table, including the users table etc. Is it better to use MyISAM on this table, where we store the text version of the PDF files (from a memory usage / performance perspective)?

    2) We use Sphinx for searching, however the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100,000 rows will become around 3-4 million rows, and a couple of months later, instead of having 300,000-350,000 rows, we'll have about 10 million rows to store the text version of these PDF files. However, we will retrieve fewer pages: instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and it will have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory.

    Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k, but it will save memory and improve performance. Is this a good approach to the problem, and do you have any ideas how to overcome it otherwise? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks,

    Read the article

  • mysql/algorithm: Weighting an average to accentuate differences from the mean

    - by Sai Emrys
    This is for a new feature on http://cssfingerprint.com (see /about for general info). The feature looks up the sites you've visited in a database of site demographics, and tries to guess what your demographic stats are based on that. All my demographics are in 0..1 probability format, not ratios or absolute numbers or the like. Essentially, you have a large number of data points that each pull you towards their own demographics. However, just taking the average is poor, because it means that adding a lot of generic data drags the number down. For example, suppose you've visited sites S0..S50. All except S0 are 48% female; S0 is 100% male. If I'm guessing your gender, I want to have a value close to 100% (male), not just the 49% that a straight average would give. Also, consider that most demographics (i.e. everything other than gender) do not have their average at 50%. For example, the average probability of having kids 0-17 is ~37%. The more a given site's demographics differ from this average (e.g. maybe it's a site for parents, or for child-free people), the more it should count in my guess of your status. What's the best way to calculate this? For extra credit: what's the best way to calculate this that is also cheap and easy to do in MySQL?
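
    One standard way to get exactly this accentuating behaviour (a hedged sketch, not the site's actual algorithm; the baseline, the clamping epsilon, and all names are my own) is to combine the per-site probabilities in log-odds space relative to the demographic's baseline average, so sites near the baseline contribute almost nothing and strongly atypical sites dominate:

        import math

        def logit(p, eps=1e-6):
            p = min(max(p, eps), 1 - eps)   # clamp to avoid log(0)
            return math.log(p / (1 - p))

        def combine(site_probs, baseline):
            # Naive-Bayes-style combination: each site contributes its
            # deviation from the baseline in log-odds space.
            total = logit(baseline) + sum(logit(p) - logit(baseline) for p in site_probs)
            return 1 / (1 + math.exp(-total))

        # The question's example: 50 sites at 48% female, one at 0% (100% male),
        # with 48% taken as the baseline average for this demographic.
        sites = [0.48] * 50 + [0.0]
        print(combine(sites, baseline=0.48))   # ~1e-6, i.e. almost certainly male

    The straight average of the same inputs is about 0.47, while the log-odds combination is driven almost entirely by the one strongly atypical site, which is the behaviour asked for. For the extra credit, the same sum of LOG(p / (1 - p)) terms is expressible as an aggregate in MySQL.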

    Read the article

  • TextArea being used as an itemEditor misbehaves when the enter key is pressed

    - by ChrisInCambo
    Hi, I have a TextArea inside an itemEditor component. The problem is that when typing in the TextArea, pressing the Enter key resets the itemEditor rather than moving the caret to the next line as expected:

        <mx:List width="100%" editable="true">
            <mx:dataProvider>
                <mx:ArrayCollection>
                    <mx:Array>
                        <mx:Object title="Stairway to Heaven" />
                    </mx:Array>
                </mx:ArrayCollection>
            </mx:dataProvider>
            <mx:itemRenderer>
                <mx:Component>
                    <mx:Text height="100" text="{data.title}"/>
                </mx:Component>
            </mx:itemRenderer>
            <mx:itemEditor>
                <mx:Component>
                    <mx:TextArea height="100" text="{data.title}"/>
                </mx:Component>
            </mx:itemEditor>
        </mx:List>

    Could anyone advise how I can get around this strange behaviour and make the Enter key behave as expected? Thanks, Chris

    Read the article

  • Querying datetime.datetime on App Engine acts differently than on the dev server - help!

    - by Alon Carmel
    Hey, I'm having some trouble with stuff that works locally but doesn't work in the App Engine Python environment. Basically, I want to get a program from an EPG between a range of date and time. I know I cannot use two inequality filters on different properties, so I followed a suggestion to save the dates as a list of datetime.datetime values, which I did:

        [datetime.datetime(2010, 5, 10, 14, 25), datetime.datetime(2010, 5, 10, 15, 0)]

    This is OK, but when I try to compare against it:

        progranon = get_object(Programs2Channel,
                               'channel_id =', channelobj.key(),
                               'endstartdate >', programstart_minex,
                               'endstartdate <', programstart_minex)

    this for some reason works locally but fails to retrieve the data on App Engine. (I'm using the Google App Engine Django patch, which uses get_object to retrieve data in transactions.) Please help. Here are more details:

        # this is the list:
        [datetime.datetime(2010, 5, 13, 10, 45), datetime.datetime(2010, 5, 13, 11, 30)]

        # this is the query:
        programstart = "" + year + "-" + month + "-" + day + " " + hour + ":" + minute
        programstart_minex = datetime.strptime(programstart, "%Y-%m-%d %H:%M")

        progranon = Programs2Channel.gql(
            'WHERE channel_id = :channelid'
            ' AND endstartdate > :programstartx AND endstartdate < :programstartx',
            channelid=channelobj.key(),
            programstartx=programstart_minex).get()

    Read the article

  • general learning methodology

    - by momo
    Just wanted to hear about the different general learning paths people embark on when learning a new language/framework. The one I currently use, which is how I learned bash and am currently learning Python, is:

    1. An instant-hacking tutorial: a very short tutorial introducing the basic syntax, variable declaration, loops, data types, etc., and how they are generally used.
    2. An in-depth tutorial with good programming style, slightly topic-specific (e.g. Mark Pilgrim's Dive into Python). Important topics for me personally are regex methods, file IO, and how the different data types are best utilized (I wrote a very primitive Bayesian spam filter using Python's dictionaries to keep track of word occurrences).
    3. Spaced repetition of syntax or short recipes. I use Anki, with questions like 'create a dictionary with filename and filesize metadata, human-readable', or simpler ones like 'match 0-3 occurrences of the letter M in a string', or 'return/create an iterator from two sequences'.

    The use of spaced repetition has been invaluable, and I credit it with the ease with which I can recall/create Python algorithms. However, I've recently started looking into Django, and I've found that spaced repetition, at least in my case, doesn't work very well for learning a framework; it works best with short code recipes (either that, or I should start looking into more basic Django framework tutorials). The problem I'm encountering is that framework programming is not only algorithms: it is actually learning the API, which can be quite complex, since you have to learn all the methods, the modules, the places where they are stored, and the sequence in which things have to be done. For example, in Django, to start a project that deals with polls (from the Django tutorial), one has to create the project, edit the settings.py file, create the polls app, edit the models.py file (which requires knowing the classes present in the models module), edit the urls.py file, etc. I found that my spaced-repetition method didn't work very well for this type of learning, so I wanted to ask you what method(s) you use for learning different frameworks/APIs.

    Read the article

  • Designing a silverlight dashboard with mef - is it possible? (with dynamic loading of xaps)

    - by Tim Robbin
    Hello! I am just trying to wrap my head around MEF, and as I am really going to love it (I guess), I started my first sample project and immediately stumbled into a big problem. Now I am asking myself if I can use MEF for my scenario at all, which is the following: imagine a dashboard with, let's say, five regions, and above each region two comboboxes. The values in the first combobox represent the different possible views (for example, ChartControl, TableControl, PictureControl, ...), and the values of the second combobox represent the different data sources for the currently selected control. As the controls are very big in size, one wants to download them as needed. If the user selects one combobox item, the corresponding control's XAP should be loaded and displayed in that specific region. If the user selects another control in the same combobox, the current control should be removed from the visual tree and the next control downloaded and displayed. If the user changes the selection in a different combobox, the corresponding control should again be loaded only in that specific region, perhaps with different data. And to make it a little more interesting: as this is a dashboard, one can change the layout from five regions to, for example, ten regions. I've seen the video "MVVM with MEF in Silverlight Video Tutorial Part 2: Plugins and Metadata" (http://csharperimage.jeremylikness.com/2010/03/mvvm-with-mef-in-silverlight-video_09.html), but he is using an ItemsControl, working with Visibility, and he only has ONE region, so I think that technique will not work for me... Puh, I hope I could make myself clear! Thanks a lot for any piece of information! Greetings, Tim.

    Read the article

  • Parse usable Street Address, City, State, Zip from a string

    - by Rob Allen
    Problem: I have an address field from an Access database which has been converted to SQL Server 2005. This field has everything in one field. I need to parse out the individual sections of the address into their appropriate fields in a normalized table. I need to do this for approximately 4,000 records, and it needs to be repeatable. Here are the rules for this exercise:

    1. No whining about how this should have been separate fields in the first place; we are often confronted with less-than-ideal situations and have to make the best of them.
    2. For this post, use any language you want.
    3. Feel free to play code golf.
    4. Assume an address in the US (for now).
    5. Assume that the input string will sometimes contain an addressee (the person being addressed) and/or a second street address (i.e. Suite B).
    6. States may be abbreviated.
    7. The zip code could be standard 5-digit or zip+4.
    8. There are typos in some instances.

    UPDATE: In response to the questions posed: standards were not universally followed, I need to store the individual values (not just geocode), and "errors" means typos (corrected above).

    Sample data (verbatim, typos included):

        A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947
        11522 Shawnee Road, Greenwood DE 19950
        144 Kings Highway, S.W. Dover, DE 19901
        Intergrated Const. Services 2 Penns Way Suite 405 New Castle, DE 19720
        Humes Realty 33 Bridle Ridge Court, Lewes, DE 19958
        Nichols Excavation 2742 Pulaski Hwy Newark, DE 19711
        2284 Bryn Zion Road, Smyrna, DE 19904
        VEI Dover Crossroads, LLC 1500 Serpentine Road, Suite 100 Baltimore MD 21
        580 North Dupont Highway Dover, DE 19901
        P.O. Box 778 Dover, DE 19903
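
    Since rule 2 allows any language, here is a minimal right-to-left sketch in Python (the regex and field names are my own; it handles only the well-formed cases and leaves addressee/suite text in the street field):

        import re

        # Parse from the right: zip (5 digits or zip+4), preceded by a
        # 2-letter state, preceded by a run of letters/dots/spaces as the
        # city; everything before that is street (incl. addressee/suite).
        ADDR_RE = re.compile(
            r"^(?P<street>.*?)[,\s]+"
            r"(?P<city>[A-Za-z. ]+?)[,\s]+"
            r"(?P<state>[A-Z]{2})[,\s]+"
            r"(?P<zip>\d{5}(?:-\d{4})?)$"
        )

        def parse(addr):
            m = ADDR_RE.match(addr.strip())
            return m.groupdict() if m else None   # None = send to manual review

        print(parse("11522 Shawnee Road, Greenwood DE 19950"))
        # {'street': '11522 Shawnee Road', 'city': 'Greenwood', 'state': 'DE', 'zip': '19950'}
        print(parse("VEI Dover Crossroads, LLC 1500 Serpentine Road, Suite 100 Baltimore MD 21"))
        # None - truncated zip, so it goes to the manual-review pile

    With roughly 4,000 records, whatever the pattern rejects (typos, truncated zips) can go to a small manual-review pile; the repeatability requirement is met by keeping the script, not by handling every malformed case automatically.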

    Read the article

  • substitution cypher with different alphabet length

    - by seanizer
    I would like to implement a simple substitution cypher to mask private IDs in URLs. I know what my IDs will look like (a combination of uppercase ASCII, digits, and underscore), and they will be rather long, as they are composite keys. I would like to use a larger alphabet to shorten the resulting codes (I'd like to use upper- and lowercase ASCII letters, digits, and nothing else). So my incoming alphabet would be [A-Z0-9_] (37 chars) and my outgoing alphabet would be [A-Za-z0-9] (62 chars), so the codes come out somewhat shorter (the length shrinks by the factor log 37 / log 62, roughly 12%). Let's say my URLs look like this:

        /my/page/GFZHFFFZFZTFZTF_24_F34

    and I want them to look like this instead:

        /my/page/Ft32zfegZFV5

    Obviously both alphabets would be shuffled to bring in some random order. This does not have to be secure; if someone figures it out, fine, but I don't want the scheme to be obvious. My desired solution would be to interpret the string as an integer in radix 37, convert that to radix 62, and write the number out using the second alphabet. Is there any sample code available that does something similar? Integer.parseInt (http://java.sun.com/javase/6/docs/api/java/lang/Integer.html#parseInt%28java.lang.String,%20int%29) has some similar logic, but it is hard-coded to standard digit behavior. Any hints? I am using Java to implement this, but code or pseudo-code in any other language is of course also helpful.
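
    Since code in any language is welcome, here is a minimal Python sketch of the radix-conversion idea (the alphabets below are in their natural order; in practice both would be shuffled with a fixed secret ordering):

        IN_ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_"     # radix 37
        OUT_ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                        "abcdefghijklmnopqrstuvwxyz0123456789")   # radix 62

        def encode(plain_id):
            # Interpret plain_id as a base-37 integer...
            n = 0
            for ch in plain_id:
                n = n * 37 + IN_ALPHABET.index(ch)
            # ...then write that integer out in base 62.
            digits = []
            while n:
                n, r = divmod(n, 62)
                digits.append(OUT_ALPHABET[r])
            return "".join(reversed(digits)) or OUT_ALPHABET[0]

        def decode(code):
            n = 0
            for ch in code:
                n = n * 62 + OUT_ALPHABET.index(ch)
            digits = []
            while n:
                n, r = divmod(n, 37)
                digits.append(IN_ALPHABET[r])
            return "".join(reversed(digits)) or IN_ALPHABET[0]

        masked = encode("GFZHFFFZFZTFZTF_24_F34")
        print(masked, decode(masked) == "GFZHFFFZFZTFZTF_24_F34")   # ... True
        # Caveat: IDs with a leading 'A' (digit value 0) lose those leading
        # characters on the round trip; length-prefix or pad if that matters.

    In Java the same logic is a loop of multiply-and-add to parse and divmod to emit, with custom digit lookup tables in place of Character.digit.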

    Read the article

  • ASP.NET MVC Authorize by Subdomain

    - by Jimmo
    I have what seems like a common issue with SaaS applications, but have not seen this question on here anywhere. I am using ASP.NET MVC with Forms Authentication. I have implemented a custom membership provider to handle the logic, but have one issue (perhaps the issue is in my mental picture of the system). As with many SaaS apps, customers create accounts and use the app in a way that looks like they are the only ones present (they only see their items, users, etc.). In reality, there are generic controllers and views presenting data depending on the customer represented in the URL. When calling something like MembershipProvider.ValidateUser, I have access to the user's customer affiliation in the User object; what I don't have is the context of the request, to compare whether it is a data request for the same customer as the user's. As an example:

    - One company, called ABC, goes to abc.mysite.com.
    - Another company, called XYZ, goes to xyz.mysite.com.

    When an ABC user calls http://abc.mysite.com/product/edit/12, I have an [Authorize] attribute on the Edit method in the ProductController to make sure he is signed in and has sufficient permission to do so. If that same ABC user tried to access http://xyz.mysite.com/product/edit/12, I would not want to validate him in the context of that call. In the ValidateUser of the MembershipProvider, I have the information about the user, but not about the request: I can tell that the user is from ABC, but I cannot tell that the request is for XYZ at that point in the code. How should I resolve this?
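
    The core of the problem is that the membership provider never sees the request, while an authorization filter does. A framework-agnostic sketch of the check itself (Python used purely for illustration; the field names are invented - in ASP.NET MVC this logic would typically live in a custom AuthorizeAttribute, whose AuthorizeCore override receives the HttpContext and therefore the Host header):

        def is_authorized(request_host, user):
            # Authorize only when the request's subdomain matches the
            # authenticated user's customer affiliation.
            subdomain = request_host.split(".")[0].lower()   # 'abc' from 'abc.mysite.com'
            return user["customer"].lower() == subdomain

        user = {"name": "jim", "customer": "ABC"}   # hypothetical user record
        print(is_authorized("abc.mysite.com", user))   # True
        print(is_authorized("xyz.mysite.com", user))   # False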

    Read the article

  • Web application architecture, and application servers?

    - by seanieb
    Hi, I'm building a web application, and I need an architecture that allows me to run it on two servers. The application scrapes information from other sites, both periodically and on input from the end user. To do this I'm using PHP + curl to scrape the information, and PHP or Python to parse it and store the results in a MySQL DB. Then I will use Python to run some algorithms on the data; this will also happen both periodically and on input from the end user. I'm going to cache some of the results in the MySQL DB, and sometimes, if the result is specific to the user, skip storing the data and just serve it to the user. I'm thinking of using PHP for the website front end on a separate web server, and running the PHP spider, MySQL DB, and Python on another server. As you can see, I'm fairly clueless. I'm familiar with PHP, MySQL, and the basics of Python, but bringing this all together with something more complex than a cron job is new to me. How do I go about implementing this? What framework(s) should I use? Is MVC a good architecture for this? (I'm new to MVC, architectures, etc.) Is CakePHP a good solution? If so, will I be able to control and monitor the Python code using it?

    Read the article

  • ACL architecture for Software as a Service in Spring 3.0

    - by geoaxis
    I am building software as a service using Spring 3.0 (Spring MVC, Spring Security, Spring Roo, Hibernate), and I have to come up with a flexible access-control-list mechanism. I have three different kinds of users:

    1. System (can do anything to the system; includes admins and internal daemons)
    2. Operations (can add and delete users and organizations, and do maintenance work on behalf of users and organizations)
    3. End users (belong to one or more organizations; for each organization, a user can have one or more roles, such as organization admin or organization read-only member, and a role like orgadmin can also add users for that organization)

    Now my question is: how should I model the User entity? If I just take the end user, it can belong to one or more organizations, so each user could hold a set of references to its organizations. But how do we model the user's roles per organization? For example, user UX belongs to organizations og1, og2, and og3; for og1 he is both orgadmin and org-read-only-user, whereas for og2 he is only orgadmin and for og3 only org-read-only-user. I could make each user belong to exactly one organization, but that makes the system bounded and I don't like that idea (although it would still satisfy the requirement). If you have a better extensible ACL architecture, please suggest it; since this is software as a service, one would expect a lot of different organizations to be part of the same system. I also had a concern that it may not be a good idea to keep og1 and og2 data in the same DB (if og1 decides to spawn a hundred reports on the system, og2 should not suffer), but that is somewhat advanced for now and relates to the physical distribution of data and the setup of services based on those ACLs rather than to the ACLs themselves. This is a community wiki question; please correct anything you wish. Thanks
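
    The usual relational shape for the End User requirement above is a three-way membership entity: one row per (user, organization, role) triple, so a user can hold several roles in one organization and different roles in others. A language-agnostic sketch (Python here purely for illustration; all names are mine, not from the question):

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Membership:
            user_id: int
            org_id: int
            role: str   # e.g. "ORG_ADMIN", "ORG_READ_ONLY"

        # User UX (id 1) from the question: admin + read-only in og1 (id 1),
        # admin only in og2 (id 2), read-only only in og3 (id 3).
        memberships = {
            Membership(1, 1, "ORG_ADMIN"),
            Membership(1, 1, "ORG_READ_ONLY"),
            Membership(1, 2, "ORG_ADMIN"),
            Membership(1, 3, "ORG_READ_ONLY"),
        }

        def roles_for(user_id, org_id):
            return {m.role for m in memberships
                    if m.user_id == user_id and m.org_id == org_id}

        print(roles_for(1, 1))   # {'ORG_ADMIN', 'ORG_READ_ONLY'} (set order may vary)

    In Hibernate terms this is an association entity mapped to its own join table (user_id, org_id, role) rather than a plain many-to-many between User and Organization, because the relationship itself carries an attribute (the role).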

    Read the article

  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in long join queries, which forces me to learn about tables outside of my module, but I think I should not concern myself with tables not directly related to my module and should instead use data-access functions (written by those responsible for the other modules) when I need data from them. Let me clarify: I am responsible for the ContactVendor module, which enables customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate details (for example i18n, activation, product availability, etc.). Now I need to show the product title of a product related to a conversation between the vendor and a customer. I may either write a long query that retrieves the product info along with the conversation stuff in one shot (which forces me to learn about the Products tables), OR I may pass the relevant product_id to the get_product_info(int) function. The first approach is obviously demanding and introduces many practices I normally consider faults in programming. The problem with the second approach is the countless mini-queries these access functions cause; the performance loss is a concern when a loop tries to fetch product titles for 100 products using functions that each perform a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things? UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decides to change the way they do things, or for some reason modifies the schema? It means other modules would break or malfunction until the change is integrated into them - the usual ripple-effect problem.
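
    A middle path between the two approaches is to keep the module boundary but have the Products module expose a batch accessor, so a page of 100 conversations costs two queries instead of 101. A hedged sketch (hypothetical schema and function names; SQLite stands in for the real database):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.row_factory = sqlite3.Row
        conn.executescript("""
            CREATE TABLE products (product_id INTEGER PRIMARY KEY, title TEXT);
            CREATE TABLE conversations (conversation_id INTEGER PRIMARY KEY, product_id INTEGER);
            INSERT INTO products VALUES (1, 'Widget'), (2, 'Gadget');
            INSERT INTO conversations VALUES (10, 1), (11, 2), (12, 1);
        """)

        def get_products_info(product_ids):
            # Batch accessor owned by the Products module: one query for many
            # ids instead of one query per id, so callers never touch Products
            # tables but also avoid the mini-query flood. (Hypothetical API.)
            ids = list(set(product_ids))
            if not ids:
                return {}
            placeholders = ",".join("?" * len(ids))
            rows = conn.execute(
                "SELECT product_id, title FROM products"
                " WHERE product_id IN (%s)" % placeholders, ids)
            return {row["product_id"]: dict(row) for row in rows}

        # ContactVendor module: fetch conversations, then hydrate titles in one shot.
        convs = conn.execute("SELECT conversation_id, product_id FROM conversations").fetchall()
        titles = get_products_info(c["product_id"] for c in convs)
        for c in convs:
            print(c["conversation_id"], titles[c["product_id"]]["title"])

    If the Products module later changes its schema, only get_products_info has to change; the ripple stops at the interface, which keeps the "code to the interface" benefit without the N+1 query cost.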

    Read the article

  • Ditching Django's models for Ajax/Web Services

    - by Igor Ganapolsky
    Recently I came across a problem at work that made me rethink Django's models. The app I am developing resides on a Linux server. It's a simple model/view/controller app that involves user interaction and updating data in the database. The problem is that this data resides in an MS SQL database on a Windows machine. So in order to use Django's models, I would have to set up an ODBC driver on Linux and then use a Python add-on like pyodbc. Well, let me tell you, setting up a reliable and functional ODBC connection on Linux is no easy feat! So much so that I spent several hours maneuvering this on my CentOS box with no luck, and was left with frustration and lots of dumb system errors. In the meantime I have a deadline to meet, and suddenly the very agile and rapid Django application is a roadblock rather than a pleasure to work with. Someone on my team suggested writing this app in .NET, but there are a few problems with that: it won't be deployable on a Linux machine, and I won't be able to work on it since I don't know ASP.NET. Then a much better suggestion was made: keep the app in Django, but instead of using models, make straight Ajax/web-service calls from the template. And then it dawned on me - what a great idea. Django's models seem like a nuisance and hindrance in this case, and I can just have someone else write .NET services on their side that I can call from my template. As a result my app will be leaner and more compact. So, I was wondering if you ever came across a similar dilemma and what you decided to do about it.

    Read the article

  • Hibernate design to speed up querying of large dataset

    - by paddydub
    I currently have the tables below, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster: I load all of the tables below into Lists to run the route-planning logic. I would appreciate any ideas on how to speed up performance, or suggestions for another way to approach this problem of handling a large set of data.

    Coordinate Connections table (INT, INT, INT, DOUBLE), containing 50,000 coordinate connections:

        ID  FROMCOORDID  TOCOORDID  DISTANCE
        1   1            2          0.383657
        2   1            17         0.173201
        3   1            63         0.258781
        4   1            64         0.013726
        5   1            65         0.459829
        6   1            95         0.458769

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

        STOPID      ROUTEID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "STOPID")
            private int stopID;

            @Column(name = "ROUTEID", nullable = false)
            private int routeID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "COORDINATEID", nullable = false)
            private Coordinate coordinate;
        }

        @Entity
        @Table(name = "COORDINATES")
        public class Coordinate {

            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private int CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {

            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private int CoordinateID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "FROMCOORDID", nullable = false)
            private Coordinate fromCoordID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "TOCOORDID", nullable = false)
            private Coordinate toCoordID;

            @Column(name = "DISTANCE", nullable = false)
            private double distance;
        }

    Read the article

  • How can I get all content within <td> tag using a HTML Agility Pack?

    - by Bob Dylan
    So I'm writing an application that will do a little screen scraping. I'm using the HTML Agility Pack to load an entire HTML page into an instance of HtmlDocument called doc. Now I want to parse that doc, looking for this:

        <table border="0" cellspacing="3">
          <tr><td>First rows stuff</td></tr>
          <tr>
            <td>
              The data I want is in here <br />
              and it's seperated by these annoying <br />
              's. No id's, classes, or even a single <p> tag. </p>
              Just a bunch of <br /> tags.
            </td>
          </tr>
        </table>

    So I just need to get the data within the second row. How can I do this? Should I use a regex or something else?
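
    For what it's worth, a regex shouldn't be necessary: the Agility Pack exposes XPath via SelectSingleNode/SelectNodes on doc.DocumentNode. Here is the same selection idea sketched in Python with lxml (a stand-in used only to illustrate the XPath and text gathering, not the Agility Pack API itself):

        from lxml import html

        doc = html.fromstring("""
        <table border="0" cellspacing="3">
          <tr><td>First rows stuff</td></tr>
          <tr><td>The data I want <br/> split by <br/> tags.</td></tr>
        </table>
        """)

        # Select the <td> of the second row, then gather its text fragments;
        # each <br/> just separates adjacent text nodes.
        td = doc.xpath("//table/tr[2]/td")[0]
        lines = [t.strip() for t in td.itertext() if t.strip()]
        print(lines)   # ['The data I want', 'split by', 'tags.']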

    Read the article

  • PHP / Zend Framework: Force prepend table name to column name in result array?

    - by Brian Lacy
    I am currently using Zend_Db_Select to retrieve hierarchical data from several joined tables, and I need to be able to convert this easily into an array. Short of using a switch statement and listing out all the columns individually in order to sort the data, my thought was that if I could get the table names auto-prepended to the keys in the result array, that would solve my problem. So, considering the following (assembled) SQL:

        SELECT user.*, contact.*
        FROM user
        INNER JOIN contact ON contact.user_id = user.user_id

    I would normally get a result array like this:

        [username]   => 'bob',
        [contact_id] => 5,
        [user_id]    => 2,
        [firstname]  => 'bob',
        [lastname]   => 'larsen'

    But instead I want this:

        [user.user_id]       => 2,
        [user.username]      => 'bob',
        [contact.contact_id] => 5,
        [contact.firstname]  => 'bob',
        [contact.lastname]   => 'larsen'

    Does anyone have an idea how to achieve this? Thanks!

    Read the article

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents, but are also very attractive for their ability to scale out and query a cluster in parallel. Document-oriented databases also hold a couple of advantages over RDBMSs for persisting the data models of object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side by side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change. On the other hand, the performance benefits of document-oriented databases mainly appear when storing deeper documents - in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples of this I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog-post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front - and in the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular whether anyone has built a significant application on a document-oriented database.

    Read the article

  • DropDownList in DataGrid produces error

    - by S Nash
    I have a DataGrid, and every time I change the value of its embedded DropDownList I get: "System.Web.HttpException: The IListSource does not contain any data sources." The grid itself loads fine, and the Name column is editable via a DropDownList. The DataGrid gets its values from a table (SELECT Name, Address FROM Persons), and the dropdown list gets its list of names from the same table, so the DataGrid and the DropDownList are bound to two different datasets. Here is my code for the DataGrid:

        <Columns>
            <ASP:ButtonColumn Text="Delete" CommandName="Delete"></ASP:ButtonColumn>
            <asp:EditCommandColumn ButtonType="LinkButton" UpdateText="Update"
                CancelText="Cancel" EditText="Edit"></asp:EditCommandColumn>
            <ASP:TemplateColumn HeaderText="Name" SortExpression="FY"
                HeaderStyle-HorizontalAlign="center" HeaderStyle-Wrap="True">
                <ItemStyle Wrap="false" HorizontalAlign="left" />
                <ItemTemplate>
                    <ASP:Label ID="Name" Text='<%# DataBinder.Eval(Container.DataItem, "Name") %>' runat="server"/>
                </ItemTemplate>
                <EditItemTemplate>
                    <ASP:DropDownList id="ddlName" cssClass="DropDownList" runat="server"
                        datasource="<%#allNames%>" DataTextField="Name" DataValueField="ID"
                        Defaultvalue='<%# DataBinder.Eval(Container.DataItem, "ID") %>'
                        OnPreRender="SetDefaultListItem" accessKey="I" AutoPostBack="true" />
                </EditItemTemplate>
            </ASP:TemplateColumn>
            <asp:BoundColumn DataField="Address" ReadOnly="True" HeaderText="Address"></asp:BoundColumn>
        </Columns>

    The error happens here:

        dg.DataSource = ds
        dg.DataBind()

    Any ideas how to solve this? All I want is a DataGrid with an editable column that can be edited by choosing one of the values in a DropDownList.

    Read the article

  • Safe to pass objects to C functions when working in JNI Invocation API?

    - by bubbadoughball
    I am coding up something using the JNI Invocation API: a C program starts up a JVM and makes calls into it. The JNIEnv pointer is global to the C file. I have numerous C functions which need to perform the same operation on a given class of jobject, so I wrote helper functions which take a jobject and process it, returning the needed data (a C data type - for example, an int status value). Is it safe to write C helper functions and pass jobjects to them as arguments? I.e. (a simple example, designed to illustrate the question):

        int getStatusValue(jobject jStatus)
        {
            return (*jenv)->CallIntMethod(jenv, jStatus, statusMethod);
        }

        int function1()
        {
            int status;

            jobject aObj = (*jenv)->NewObject(jenv, aDefinedClass, aDefinedCtor);
            jobject j = (*jenv)->CallObjectMethod(jenv, aObj, aDefinedObjGetMethod);

            status = getStatusValue(j);

            (*jenv)->DeleteLocalRef(jenv, aObj);
            (*jenv)->DeleteLocalRef(jenv, j);

            return status;
        }

    Thanks.

    Read the article

  • how would you like computer science classes to be taught?

    - by aaa
    Hello. I am a graduate student now, and hopefully someday I will teach. My interests are C++, Python, embedded languages, and scientific computing. Meanwhile I daydream about how I would teach. I was not quite happy with my undergraduate university, as I found many computer science classes lacking, so I would like to ask you: if you were a student, how would you like your computer science classes to be taught? I understand it is a very subjective question, but nevertheless I think it's important to know what people want. Some specific points I am interested in:

    - Should computer languages be taught explicitly, or should students be required to pick up languages on their own?
    - What is better for learning: tests, projects, some sort of take-home exam?
    - How do you think class time should be used: theory, introduction, explanations, etc.?
    - Do you think group projects are important?
    - How much about computer architecture do you want to learn in a computer science class (not necessarily an assembler class)?
    - Should a particular operating system/editor be mandated or encouraged?

    Thanks. Thank you for your comments; the question has been closed because it is a discussion question rather than Q&A. If you know an appropriate website for discussions of this sort with a low noise ratio, please let me know.

    Read the article

  • DataGrid calculate difference between values in two databound cells

    - by justMe
    Hello! In my small application I have a DataGrid (see screenshot) that's bound to a list of Measurement objects. A Measurement is just a data container with two properties: Date and CounterGas (float). Each Measurement object represents my gas consumption at a specific date. The list of measurements is bound to the DataGrid as follows:

        <DataGrid ItemsSource="{Binding Path=Measurements}" AutoGenerateColumns="False">
            <DataGrid.Columns>
                <DataGridTextColumn Header="Date"
                    Binding="{Binding Path=Date, StringFormat={}{0:dd.MM.yyyy}}" />
                <DataGridTextColumn Header="Counter Gas"
                    Binding="{Binding Path=ValueGas, StringFormat={}{0:F3}}" />
            </DataGrid.Columns>
        </DataGrid>

    Well, and now my question :) I'd like to have another column right next to the "Counter Gas" column which shows the difference between the current counter value and the previous counter value. E.g. this additional column should calculate the difference between the values of Feb. 13th and Feb. 6th: 199.789 - 187.115 = 15.674. What is the best way to achieve this? I'd like to avoid any calculation in the Measurement class, which should just hold the data; I'd rather have the DataGrid handle the calculation. So is there a way to add another column that just calculates the difference between two values? Maybe using some kind of converter and extreme binding? ;D P.S.: Maybe someone with a better reputation could embed the screenshot. Thanks :)

    Read the article

  • fullCalendar: Implementing a custom event fetcher using event as function

    - by Nick-ACNB
    If I specify a JSON feed, the events are re-fetched every time, and the fetch delay is visible as the items appear, which is undesirable (at least after the initial fetch is done). I am using the latest version from GitHub, 1.4.6. The lazyFetching option only seems to help when you change views after previously fetching a wider range (e.g. a month and then drilling down to week/day), which is unusable for me since I only use agendaDay. Here is what I am trying to achieve:

        $("#calendar").fullCalendar({
            events: function(start, end, callback) {
                $.getJSON("/GetEvents", { start: start.valueOf() },
                    function(data, textStatus) {
                        $.each(data, function(i, event) {
                            // Goal here: only add items that aren't already rendered.
                            if ($("#calendar").fullCalendar('clientEvents', event.id) == "")
                                $("#calendar").fullCalendar('renderEvent', event, true);
                        });
                    });
            }
        });

    The goal is to use the sticky flag of the renderEvent method so that on-screen events aren't re-rendered if they were previously fetched. I omitted the part where I manually delete events that were deleted and modify those that were updated, since I am in a multi-user setting, but you get the point. My issue is that the events are fetched once and added, but when I change day and come back, they don't render, even though I used sticky. Has anyone hit this, or did I code anything wrong? Also, is this a wrong way to go? I will consider any input. :) Thank you very much.

    Read the article
