Search Results

Search found 61393 results on 2456 pages for 'data integration'.

Page 507/2456 | < Previous Page | 503 504 505 506 507 508 509 510 511 512 513 514  | Next Page >

  • How to Integrate Cloud Applications with Oracle SOA Suite

    - by Bruce Tierney
    Having seen a preview of the slides for this upcoming Oracle OpenWorld session on Tuesday, Oct 2nd at 11:45 at Moscone West 3003, I'm definitely looking forward to this deep-dive view into cloud integration. Oracle's Senior Director of Product Management Rajesh Raheja will cover what's involved in cloud integration and present technical solutions for integrating with Oracle cloud applications such as Oracle Fusion and RightNow, as well as third-party applications like Salesforce. Also presenting will be Geeta Pyne, Director of Middleware at BMC, who will discuss how BMC has used Oracle SOA Suite for cloud integration.

    Read the article

  • Important Security Issue: Is it possible to put binary image data into html markup code and then get

    - by Joern Akkermann
    Hi, it's an important security issue and I'm sure this should be possible. A simple example: You run a community portal. Users are registered and upload their pictures. Your application enforces security rules that determine when a picture is allowed to be displayed; for example, users must be friends on both sides in the system before one can view someone else's uploaded pictures. Here comes the problem: it is possible that someone crawls the image directories of your server, and you want to protect your users from such attacks. If it's possible to put the binary data of an image directly into the HTML markup, you can restrict access to your image directories to the user and group your web application runs as, and pass the image data to the browser directly in the HTML served by Apache. The only possible weakness then is the password of the user that your web app runs as. Is there already a way to do this? Yours, Joern.

    Read the article
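
    One way to do what the question above describes is a base64 data URI, which browsers render directly from the markup. The sketch below is only an illustration; the file path, MIME type, and permission-check helper are assumptions, not anything from the original post.

        <?php
        // Hypothetical setup: images live outside the web root, readable only by
        // the web server's user/group, so they cannot be crawled directly.
        if (!current_user_may_view($pictureOwnerId)) {   // hypothetical permission check
            exit('Not allowed');
        }
        $path = '/var/private_uploads/user42.jpg';       // hypothetical path
        $mime = 'image/jpeg';
        $data = base64_encode(file_get_contents($path));
        echo '<img alt="picture" src="data:' . $mime . ';base64,' . $data . '">';

    Note that data URIs inflate the payload by roughly a third and suit small images best; very old browsers (IE7 and earlier) do not support them.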

  • Best way to transfer data from one page to another in ASP.NET

    - by Anish Karunakaran
    In my scenario I am using a JavaScript popup that opens another form, an address-entry form with merely 20 controls on it. Right now I retrieve the data from the address page back to the main page using a session variable that stores a data table of values; two different session variables are used this way for the permanent and temporary addresses. I know that using session variables degrades performance. What is the best way to transfer values from one page to another? Regards, Anish.

    Read the article
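
    When both pages take part in the same server-side request, HttpContext.Items is a lightweight alternative to Session because it lives only for that one request. The sketch below is just an illustration of that pattern; the page names, button handler, and BuildAddressTable helper are assumptions. For a pure JavaScript popup, values are more commonly passed back through window.opener or a query string instead.

        // Hypothetical address page: stash the values for this request only, then transfer.
        protected void btnSave_Click(object sender, EventArgs e)
        {
            Context.Items["PermanentAddress"] = BuildAddressTable(); // hypothetical helper returning a DataTable
            Server.Transfer("MainPage.aspx");
        }

        // Hypothetical MainPage.aspx code-behind: read the values handed over by Server.Transfer.
        protected void Page_Load(object sender, EventArgs e)
        {
            var table = Context.Items["PermanentAddress"] as System.Data.DataTable;
            if (table != null)
            {
                // bind the table to the address controls here
            }
        }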

  • What database works well with 200+GB of data?

    - by taw
    I've been using MySQL (with InnoDB, on Amazon RDS) because it's sort of the universal default, but it's been ridiculously under-performing, and tweaking it only delays the inevitable. The data is mostly relatively short blobs (<1 kB each) of information about 100M+ URLs. There is (or should be; MySQL cannot seem to handle it) a very high volume of insert / update / retrieve operations but few complex queries - not that complex queries wouldn't be useful, but MySQL is so slow that it's far faster to get the data out, process it locally, and cache the results somewhere. I can keep tweaking MySQL and throwing more hardware at it, but it seems increasingly futile. So what are the options? SQL/relational model/etc. optional - anything will do as long as it's fast, networked, and language-independent.

    Read the article

  • Is it possible to put binary image data into html markup and then get the image displayed as usual i

    - by Joern Akkermann
    It's an important security issue and I'm sure this should be possible. A simple example: You run a community portal. Users are registered and upload their pictures. Your application enforces security rules that determine when a picture is allowed to be displayed; for example, users must be friends on both sides in the system before one can view someone else's uploaded pictures. Here comes the problem: it is possible that someone crawls the image directories of your server, and you want to protect your users from such attacks. If it's possible to put the binary data of an image directly into the HTML markup, you can restrict access to your image directories to the user and group your web application runs as, and pass the image data to the browser directly in the HTML served by Apache. The only possible weakness then is the password of the user that your web app runs as. Is there already a way to do this?

    Read the article

  • PHP PDO: Fetching data as objects - properties assigned BEFORE __construct is called. Is this correc

    - by Erik
    The full question should be "Is this correct, or is it some bug I can't count on?" I've been working with PDO more and in particular playing with fetching data directly into objects. While doing so I discovered this: if I fetch data directly into an object like this: $STH = $DBH->prepare('SELECT first_name, address FROM people WHERE 1'); $STH->execute(); $obj = $STH->fetchAll(PDO::FETCH_CLASS, 'person'); and have a class like this: class person { public $first_name; public $address; function __construct() { $this->first_name = $this->first_name . " is the name"; } } it shows me that the properties are being assigned before __construct is called -- because the names all have " is the name" appended. Is this some bug (in which case I can't/won't count on it), or is this The Way It Should Be? Because it's really quite a nice thing the way it works currently.

    Read the article
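
    This property-before-constructor order is in fact the documented default for PDO::FETCH_CLASS, and PDO exposes a flag to flip it. A minimal sketch (table and class names reuse the ones from the question):

        <?php
        // With FETCH_PROPS_LATE, PDO calls __construct() first and assigns the
        // column values to the properties afterwards.
        $STH = $DBH->prepare('SELECT first_name, address FROM people');
        $STH->execute();
        $people = $STH->fetchAll(PDO::FETCH_CLASS | PDO::FETCH_PROPS_LATE, 'person');

    Without the flag you get the behaviour described above, so relying on it is reasonable rather than a bug.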

  • How to store or share live data between PHP Requests?

    - by Devyn
    Hi, I want to start a project for Facebook, and the application will be a real-time multiplayer chess game. The problem I'm having is that I have no idea how to store the data when player 1 moves a piece and then update the new position in player 2's browser. I'm going to use PHP and MySQL for the server side and jQuery for client rendering. The simplest idea is to store the data in XML or MySQL and regenerate the result in player 2's browser, but I know that when thousands of players are playing, that will not be an efficient way. Since I don't have time to study a new language for this project, I'm going to have to stick with PHP. I'm not going to use Flash either, because I want my client side light-weight and Flash-free. So is there any way that will solve my problems?

    Read the article
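
    Within plain PHP, long polling is a common compromise: the browser asks for "anything newer than the last move I saw" and the server holds the request open for a short while before answering. The sketch below only illustrates that idea; the table, columns, and connection details are assumptions.

        <?php
        // Hypothetical long-poll endpoint, e.g. poll.php?game=7&last_move=42
        $gameId   = (int) $_GET['game'];
        $lastSeen = (int) $_GET['last_move'];
        $pdo = new PDO('mysql:host=localhost;dbname=chess', 'user', 'pass'); // assumed DSN

        for ($i = 0; $i < 25; $i++) {                     // hold the request open up to ~25 s
            $stmt = $pdo->prepare('SELECT id, piece, from_sq, to_sq FROM moves
                                   WHERE game_id = ? AND id > ? ORDER BY id LIMIT 1');
            $stmt->execute(array($gameId, $lastSeen));
            if ($move = $stmt->fetch(PDO::FETCH_ASSOC)) {
                header('Content-Type: application/json');
                echo json_encode($move);                  // player 2's jQuery poller applies it
                exit;
            }
            sleep(1);                                     // nothing yet, check again
        }
        echo json_encode(null);                           // timed out; the client simply re-polls

    Each open request ties up a PHP worker, so this only scales so far; beyond that, a push service or a socket server outside PHP is the usual next step.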

  • Is it better to use List or Collection?

    - by Vivin Paliath
    I have an object that stores some data in a list. The implementation could change later, and I don't want to expose the internal implementation to the end user. However, the user must have the ability to modify and access this collection of data. Currently I have something like this: public List<SomeDataType> getData() { return this.data; } public void setData(List<SomeDataType> data) { this.data = data; } Does this mean that I have allowed the internal implementation details to leak out? Should I be doing this instead? public Collection<SomeDataType> getData() { return this.data; } public void setData(Collection<SomeDataType> data) { this.data = new ArrayList<SomeDataType>(data); }

    Read the article
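
    A common middle ground is to expose only the Collection interface and guard the internal list against outside mutation; the sketch below assumes SomeDataType is simply the element type of the data being stored.

        import java.util.ArrayList;
        import java.util.Collection;
        import java.util.Collections;
        import java.util.List;

        public class DataHolder<SomeDataType> {
            private List<SomeDataType> data = new ArrayList<SomeDataType>();

            // Callers get a read-only view and never learn the concrete list type.
            public Collection<SomeDataType> getData() {
                return Collections.unmodifiableCollection(data);
            }

            // Defensive copy: later changes to the caller's collection don't leak in.
            public void setData(Collection<SomeDataType> data) {
                this.data = new ArrayList<SomeDataType>(data);
            }
        }

    If callers genuinely need index-based access, returning List is fine too; the leak to worry about is handing out the mutable internal reference, not the choice of interface.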

  • How do you data drive task dependencies via properties in psake?

    - by Jordan
    In MSBuild you can data drive target dependencies by passing an item group into a target, like so: <ItemGroup> <FullBuildDependsOn Include="Package;CoreFinalize" Condition="@(FullBuildDependsOn) == ''" /> </ItemGroup> <Target Name="FullBuild" DependsOnTargets="@(FullBuildDependsOn)" /> If you don't override the FullBuildDependsOn item group, the FullBuild target defaults to depending on the Package and CoreFinalize targets. However, you can override this by defining your own FullBuildDependsOn item group. I'd like to do the same in psake - for example: properties { $FullBuildDependsOn = "Package", "CoreFinalize" } task default -depends FullBuild # this won't work because $FullBuildDependsOn hasn't been defined yet - the "Task" function will see this as a null depends array task FullBuild -depends $FullBuildDependsOn What do I need to do to data drive the task dependencies in psake?

    Read the article

  • How to programmatically change a data cell of Ms Access using VB.Net?

    - by manuel
    I have a Microsoft Access database file connected to VB.NET. In the database table, I have a 'No.' and a 'Status' column. Then I have a textbox where I can input an integer. I also have a button which, when I click it, should change the data cell in 'Status' where 'No.' = textbox.Text(). Let's say I want the 'Status' cell to be changed to "Low". What code should I put in the button_Click event handler? Here is the code I used to retrieve and display the database table. Public Class theForm Dim con As New OleDb.OleDbConnection Dim ds As New DataSet Dim daTitles As OleDb.OleDbDataAdapter Private Sub theForm_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load con.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0; Data Source=|DataDirectory|\db1.mdb" con.Open() daTitles = New OleDb.OleDbDataAdapter("SELECT * FROM Titles", con) daTitles.Fill(ds, "Titles") DataGridView1.DataSource = ds.Tables("Titles") DataGridView1.AutoResizeColumns() End Sub

    Read the article
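
    A parameterized UPDATE through the same OleDb connection is the usual way to do this. The sketch below is only illustrative; the table name Titles and the column names [No] and [Status] are assumptions about the schema.

        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            ' OleDb uses positional ? placeholders, so add the parameters in order.
            Dim cmd As New OleDb.OleDbCommand("UPDATE Titles SET [Status] = ? WHERE [No] = ?", con)
            cmd.Parameters.AddWithValue("@status", "Low")
            cmd.Parameters.AddWithValue("@no", CInt(TextBox1.Text))
            cmd.ExecuteNonQuery()

            ' Re-fill the DataSet so the DataGridView shows the change.
            ds.Tables("Titles").Clear()
            daTitles.Fill(ds, "Titles")
        End Sub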

  • Can a .csv file be used as a data source in Visual Studio 2008?

    - by Kevin
    I'm pretty new to C# and Visual Studio. I'm writing a small program that will read a .csv file and then write the records read to a SQL Server database table. I can manually parse the .csv file, but I was wondering if it is possible to somehow "describe" the .csv file to Visual Studio so that I can use it as a data source? I should mention that the first two lines in the .csv file contain header information and the following lines are the actual comma-delimited data. Also, I should mention that this program is a stand-alone console program with no user interface.

    Read the article
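
    Two common routes here are pointing an OleDb connection at the folder using the Jet text driver (which makes the .csv queryable like a table) or simply parsing the file with the TextFieldParser class in Microsoft.VisualBasic.FileIO, which works fine from C#. The sketch below shows the latter; the path and the two skipped header lines mirror the question, everything else is an assumption.

        using System;
        using Microsoft.VisualBasic.FileIO;   // add a project reference to Microsoft.VisualBasic

        class CsvImport
        {
            static void Main()
            {
                using (var parser = new TextFieldParser(@"C:\data\input.csv"))  // hypothetical path
                {
                    parser.TextFieldType = FieldType.Delimited;
                    parser.SetDelimiters(",");

                    parser.ReadLine();   // skip header line 1
                    parser.ReadLine();   // skip header line 2

                    while (!parser.EndOfData)
                    {
                        string[] fields = parser.ReadFields();
                        // map fields[] onto SqlCommand parameters and INSERT into SQL Server here
                        Console.WriteLine(string.Join(" | ", fields));
                    }
                }
            }
        }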

  • What are the most efficient MySQL column types for this data?

    - by AlabamaKush
    I have several tables with some pretty standard data in each. Can somebody help me optimize them by telling me the best column types for this data? What's beside each entry is what I have currently.
      Number (max length 7)   --> MEDIUMINT(8) Unsigned
      Text (max length 30)    --> VARCHAR(30)
      Text (max length 200)   --> VARCHAR(200)
      Number (max length 4)   --> SMALLINT(5) Unsigned
      Number (either 0 or 1)  --> TINYINT(1) Unsigned
      Text (max length 500)   --> TEXT
    Any suggestions? I'm just guessing with this so I know some of them are wrong...

    Read the article
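
    For the ranges listed, those guesses are already close. A sketch of one possible layout follows; the column names are invented purely for illustration.

        -- Hedged sketch: sizes follow the list above, names are made up.
        CREATE TABLE example (
          num7   MEDIUMINT UNSIGNED,   -- max 16,777,215, comfortably covers 7 digits
          txt30  VARCHAR(30),
          txt200 VARCHAR(200),
          num4   SMALLINT UNSIGNED,    -- max 65,535, covers 4 digits
          flag   TINYINT(1) UNSIGNED,  -- 0 or 1
          txt500 VARCHAR(500)          -- VARCHAR can hold 500 chars on MySQL 5.0.3+; TEXT also works
        ) ENGINE=InnoDB;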

  • How to embed an authorize.net payment gateway form into a single page website with one item for sale?

    - by Adam S
    My website sells one item. I am currently using the simple checkout button embedded on the website. Rather than having the button, I would like the order form to be on the single page, with a field for quantity. At first I imagined there would be a simple form I could embed; however, it looks like I need full integration into my website through the Advanced Integration Method (AIM), which is much more complicated than I wanted. I don't want that level of integration into my website - can I avoid it, and if I can't, what is the cleanest and simplest way to do it?

    Read the article

  • How to show all methods and data when the object has no "__iter__" function in Python

    - by zjm1126
    I found a way: (1) the dir(object) output is: a="['__class__', '__contains__', '__delattr__', '__delitem__', '__dict__', '__doc__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__iter__', '__metaclass__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__str__', '__weakref__', '_errors', '_fields', '_prefix', '_unbound_fields', 'confirm', 'data', 'email', 'errors', 'password', 'populate_obj', 'process', 'username', 'validate']" (2) b = eval(a) (3) and it became a list of all the methods: ['__class__', '__contains__', '__delattr__', '__delitem__', '__dict__', '__doc__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__iter__', '__metaclass__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__str__', '__weakref__', '_errors', '_fields', '_prefix', '_unbound_fields', 'confirm', 'data', 'email', 'errors', 'password', 'populate_obj', 'process', 'username', 'validate'] (4) then, to show the object's methods, all the code is: s='' a=eval(str(dir(object))) for i in a: s+=str(i)+':'+str(object[i]) print s but it shows an error: KeyError: '__class__' So how can I make my code run? Thanks

    Read the article
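
    The KeyError comes from object[i]: dir() returns attribute names, and names have to be read back with getattr(), not item access (and dir() already returns a list, so the eval() round-trip isn't needed). A small sketch, with the form object name made up:

        def describe(obj):
            """Return 'name: value' lines for everything dir() reports on obj."""
            lines = []
            for name in dir(obj):
                try:
                    value = getattr(obj, name)
                except AttributeError:
                    value = '<unreadable>'
                lines.append('%s: %r' % (name, value))
            return '\n'.join(lines)

        print describe(my_form)   # my_form is whatever object you were inspecting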

  • How can I calculate data for a boxplot (quartiles, median) in a Rails app on Heroku? (Heroku uses Po

    - by hadees
    I'm trying to calculate the data needed to generate a box plot, which means I need to figure out the 1st and 3rd quartiles along with the median. I have found some solutions for doing it in PostgreSQL; however, they seem to depend on either PL/Python or PL/R, and it seems Heroku does not have either enabled for their PostgreSQL databases. In fact, I ran "select lanname from pg_language;" and only got back "internal". I also found some code to do it in pure Ruby, but that seems somewhat inefficient to me. I'm rather new to box plots, PostgreSQL, and Ruby on Rails, so I'm open to suggestions on how I should handle this. There is a possibility of having a lot of data, which is why I'm concerned with performance; however, if the solution ends up being too complex, I may just do it in Ruby, and if my application gets big enough to warrant it, get my own PostgreSQL instance I can host somewhere else. *Note: since I was only able to post one link because I'm new, I decided to share a pastie with some relevant information.

    Read the article
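
    If the calculation does end up in Ruby, the quartile math itself is small. A sketch follows, with the model and column names made up for illustration:

        # Linear-interpolation percentile over a sorted array of numbers.
        def percentile(sorted, p)
          return nil if sorted.empty?
          rank = p * (sorted.length - 1)
          lower, upper = sorted[rank.floor], sorted[rank.ceil]
          lower + (upper - lower) * (rank - rank.floor)
        end

        values = Measurement.all.map(&:value).sort   # hypothetical model and column
        q1     = percentile(values, 0.25)
        median = percentile(values, 0.50)
        q3     = percentile(values, 0.75)

    For large tables you can skip instantiating ActiveRecord objects by pulling just that column with connection.select_values and converting the strings to numbers before sorting.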

  • SQL query to return data from two separate rows in a table joined to a master table

    - by Ali
    I have two tables of data with the following fields: table1=(ITTAG, ITCODE, ITDESC, SUPCODE) and table2=(ACCODE, ACNAME, ROUTE, SALMAN). table2 is my customer master table that contains my customer data such as customer code, customer name and so on. Every route has a supervisor (table1.SUPCODE), and I need to get the supervisor name in my result even though both the supervisor name and code live in one table. table1 contains all names, distinguished by ITTAG; for example, a supervisor's row has ITTAG='K' and a salesman's row has ITTAG='S'.
      ITTAG  ITCODE  ITDESC      SUPCODE
      -----  ------  ----------  -------
      S      JT      JOHN TOMAS  TF
      K      WK      VIKI KOO    NULL
    Now this is the result I want:
      ACCODE   ACNAME     ROUTE  SALESMANNAME  SUPERVISORNAME
      -------  ---------  -----  ------------  --------------
      IMC1010  ABC HOTEL  01     JOHN TOMAS    VIKI KOO
    I hope this information is sufficient to get the query.

    Read the article
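
    Joining table1 twice - once for the salesman row, once for the supervisor row - gives the shape described above. This is a hedged sketch that assumes table2.SALMAN stores the salesman's ITCODE and the salesman row's SUPCODE points at the supervisor's ITCODE:

        SELECT c.ACCODE,
               c.ACNAME,
               c.ROUTE,
               s.ITDESC   AS SALESMANNAME,
               sup.ITDESC AS SUPERVISORNAME
        FROM table2 c
        JOIN table1 s
          ON s.ITCODE = c.SALMAN AND s.ITTAG = 'S'
        LEFT JOIN table1 sup
          ON sup.ITCODE = s.SUPCODE AND sup.ITTAG = 'K';

    The LEFT JOIN keeps customers whose salesman has no supervisor (SUPCODE is NULL).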

  • Web Services Example - Part 2: Programmatic

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 2 of our Web Service examples. In this posting we'll take a look at using a SOAP Web Service, but calling it programmatically in code and parsing the return into a bean.

    Getting the sample code: Just click here to download a zip of the entire project. You can unzip it and load it into JDeveloper and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed. Note: This is a different workspace than WS-Part1.

    Defining our Web Service: Just like our first installment, we are using the same public weather forecast web service provided free by CDYNE Corporation. Sometimes this service goes down, so please ensure you know it's up before reporting that this example isn't working. We're going to concentrate on the same two web service methods, GetCityForecastByZIP and GetWeatherInformation.

    Defining the Application: The application setup is identical to the Weather1 version. There are some improvements to the data that is displayed as part of this example, though. Now we are able to show the associated image along with each forecast line when using the Forecast By Zip feature. We've also added the temperature Hi/Low values into the UI.

    Summary of Fundamental Changes In This Application: The most fundamental change is that we're binding the UI to the Bean Data Controls instead of directly to the Web Service Data Controls. This gives us much more flexibility to control the shape of the data and allows us to do caching of the data outside of the Web Service. This way, if your application is, say, offline, your bean could still populate with data from a local cache and still show you some UI, as opposed to completely failing because you don't have any connectivity. In general we promote this type of programming technique with ADF Mobile to insulate your application from any issues with network connectivity.

    What's different with this example? We have set up the Web Service DC the same way, but now we have managed beans to process the data. The following classes define the "Model" of our application: CityInformation-CityForecast-Forecast and WeatherInformation-WeatherDescription. We use WeatherBean for UI interaction with the model layer. If you look through this example, we don't really do that much with the Java code except use it to grab the image URL from the weather description. In a more realistic example, you might be using some JDBC classes to persist the data to a local database. To have a good architecture it is always good to keep your model and UI layers separate. This gets muddied if you start to use bindings on a page invoked from Java code and this Java code starts to become your "model" layer. Since bindings are page specific, your model layer starts to become entwined with your UI. Not good! To help with this, we've added some utility functions that let you invoke DC methods without having a binding and thus execute methods from your "model" layer without requiring a binding in your page definition. We do this with the invokeDataControlMethod of the AdfmfJavaUtilities class. An example of this method call is available in line 95 of WeatherInformation.java and line 93 of CityInformation.java.

    What's a GenericType? Because Web Service Data Controls (and also URL Data Controls, AKA REST) use generic name/value pairs to define their structure and don't have strongly typed objects, these are actually stored internally as GenericType objects. The GenericType class is simply a property map of name/value pairs that can be hierarchical. There are methods like getAttribute where you supply the index of the attribute or its string property name. Why is this important to know? Because invokeDataControlMethod returns GenericType objects, and developers either need to parse these GenericType objects themselves or use one of our helper functions.

    GenericTypeBeanSerializationHelper: This class does exactly what its name implies. It's a helper class for developers to aid in serialization of GenericTypes to/from Java objects. This is extremely handy if you have a large GenericType object with many attributes (or you're just lazy like me!) and you just want to parse it out into a real Java object you can use more easily. Here you would use the fromGenericType method. This method takes the class of the Java object you wish to return and the GenericType as parameters. The method then parses through each attribute in the GenericType and uses reflection to set that same attribute in the Java class. Then the method returns that new object of the class you specified. This is obviously very handy to avoid a lot of shuffling code between GenericType and your own Java classes. The reverse method, toGenericType, is also available when you want to go the other way. In this case you supply the string that represents the package location in the DataControl definition (example: "MyDC.myParams.MyCollection") and then pass in the Java object that holds the data, and a GenericType is returned to you. Again, it will use reflection to calculate the attributes that match between the Java class and the GenericType and call the getters/setters on those.

    Issues and Possible Improvements: In the next installment we'll show you how to make your web service calls asynchronously so your UI will fill dynamically when the service call returns, but in the meantime you show the data you have locally in your bean, fed from some local cache. This gives your users instant delivery of some data while you fetch other data in the background.

    Read the article
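
    To make the two helper methods concrete, here is a small illustrative fragment based only on the description above; the data-control path string and the CityForecast bean are stand-ins, not code from the downloadable sample.

        // GenericType -> typed bean: pass the target class and the GenericType.
        CityForecast forecast = (CityForecast)
                GenericTypeBeanSerializationHelper.fromGenericType(CityForecast.class, genericResult);

        // Typed bean -> GenericType: the string is the package location in the
        // DataControl definition ("WeatherDC.cityForecast" is a made-up example).
        GenericType gt =
                GenericTypeBeanSerializationHelper.toGenericType("WeatherDC.cityForecast", forecast);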

  • Right-Time Retail Part 2

    - by David Dorf
    This is part two of the three-part series.

    Right-Time Integration: Of course these real-time enabling technologies are only as good as the systems that utilize them, and it only takes one bottleneck to slow everyone else down. What good is an immediate stock-out notification if the supply chain can’t react until tomorrow? Since being formed in 2006, Oracle Retail has been not only adding more integrations between systems, but also modernizing integrations for appropriate speed. Notice I tossed in the word “appropriate.” Not everything needs to be real-time – again, we’re talking about Right-Time Retail. The speed of data capture, analysis, and execution must be synchronized or you’re wasting effort. Unfortunately, there isn’t an enterprise-wide dial that you can crank-up for your estate. You’ll need to improve things piecemeal, with people and processes as limiting factors while choosing the appropriate types of integrations. There are three integration styles we see in the retail industry.

    First is batch. I know, the word “batch” just sounds slow, but this pattern is less about velocity and more about volume. When there are large amounts of data to be moved, you’ll want to use batch processes. Our technology of choice here is Oracle Data Integrator (ODI), which provides a fast version of Extract-Transform-Load (ETL). Instead of the three-step process, the load and transform steps are combined to save time. ODI is a key technology for moving data into Retail Analytics where we can apply science. Performing analytics on each sale as it occurs doesn’t make any sense, so we batch up a statistically significant amount and submit all at once.

    The second style is fire-and-forget. For some types of data, we want the data to arrive ASAP but immediacy is not necessary. Speed is less important than guaranteed delivery, so we use message-oriented middleware available in both Weblogic and the Oracle database. For example, Point-of-Service transactions are queued for delivery to Central Office at corporate. If the network is offline, those transactions remain in the queue and will be delivered when the network returns. Transactions cannot be lost and they must be delivered in order. (Ever tried processing a return before the sale?) To enhance the standard queues, we offer the Retail Integration Bus (RIB) to help the management and monitoring of fire-and-forget messaging in the enterprise.

    The third style is request-response and is most commonly implemented as Web services. This is a synchronous message where the sender waits for a response. In this situation, the volume of data is small, guaranteed delivery is not necessary, but speed is very important.
Examples include the website checking inventory, a price lookup, or processing a credit card authorization. The Oracle Service Bus (OSB) typically handles the routing of such messages, and we’ve enhanced its abilities with the Retail Service Backbone (RSB). To better understand these integration patterns and where they apply within the retail enterprise, we’re providing the Retail Reference Library (RRL) at no charge to Oracle Retail customers. The library is composed of a large number of industry business processes, including those necessary to support Commerce Anywhere, as well as detailed architectural diagrams. These diagrams allow implementers to understand the systems involved in integrations and the specific data payloads. Furthermore, with our upcoming release we’ll be providing a new tool called the Retail Integration Console (RIC) that allows IT to monitor and manage integrations from a single point. Using RIC, retailers can quickly discern where integration activity is occurring, volume statistics, average response times, and errors. The dashboards provide the ability to dive down into the architecture documentation to gather information all the way down to the specific payload. Retailers that want real-time integrations will also need real-time monitoring of those integrations to ensure service-level agreements are maintained. Part 3 looks at marketing.

    Read the article

  • XSS as attack vector even if XSS data not stored?

    - by Klaas van Schelven
    I have a question about XSS. Can forms be used as a vector for XSS even if the data is not stored in the database and used at a later point? I.e. in PHP the code would be this: <input type="text" value="<?= @$_POST['my_field'] ?>" name="my_field"> Showing an alert box (demonstrating that JS can be run) in your own browser is trivial with the code above. But is this exploitable across browsers as well? The only scenario I see is where you trick someone into visiting a certain page, i.e. a combination of CSRF and XSS. "Stored in a database and used at a later point": the scenario I understand about XSS is where you're able to post data to a site that runs JavaScript and is shown on a page in a browser that has greater/different privileges than your own. But, to be clear, this is not what I'm talking about above.

    Read the article
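
    Yes - this is exactly reflected XSS, and it is exploitable by luring a victim to a crafted link or auto-submitting form even though nothing is ever stored. The usual guard is to escape on output; a minimal sketch of the same field with escaping:

        <?php
        // Escape everything echoed back into HTML so injected markup is rendered inert.
        $value = isset($_POST['my_field']) ? $_POST['my_field'] : '';
        ?>
        <input type="text" name="my_field"
               value="<?= htmlspecialchars($value, ENT_QUOTES, 'UTF-8') ?>">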

  • How can I draw a log-normalized imshow plot with a colorbar representing the raw data in matplotlib

    - by Adam Fraser
    I'm using matplotlib to plot log-normalized images, but I would like the original raw image data to be represented in the colorbar rather than the [0-1] interval. I get the feeling there's a more matplotlib'y way of doing this by using some sort of normalization object and not transforming the data beforehand... in any case, there could be negative values in the raw image.

        import matplotlib.pyplot as plt
        import numpy as np

        def log_transform(im):
            '''returns log(image) scaled to the interval [0,1]'''
            try:
                (min, max) = (im[im > 0].min(), im.max())
                if (max > min) and (max > 0):
                    return (np.log(im.clip(min, max)) - np.log(min)) / (np.log(max) - np.log(min))
            except:
                pass
            return im

        a = np.ones((100,100))
        for i in range(100):
            a[i] = i

        f = plt.figure()
        ax = f.add_subplot(111)
        res = ax.imshow(log_transform(a))
        # the colorbar drawn shows [0-1], but I want to see [0-99]
        cb = f.colorbar(res)

    I've tried using cb.set_array, but that didn't appear to do anything, and cb.set_clim, but that rescales the colors completely. Thanks in advance for any help :)

    Read the article
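
    The normalization object hinted at above exists: matplotlib.colors.LogNorm does the log scaling inside imshow, so the colorbar is labelled with the raw data values. A small sketch follows; it uses strictly positive sample data, since LogNorm cannot handle zeros or negatives (for data with negative values, SymLogNorm in newer matplotlib versions is the usual fallback).

        import matplotlib.pyplot as plt
        import numpy as np
        from matplotlib.colors import LogNorm

        a = np.arange(1, 101, dtype=float).repeat(100).reshape(100, 100)

        f = plt.figure()
        ax = f.add_subplot(111)
        # Let imshow apply the log scaling; the colorbar then shows 1..100, not [0-1].
        res = ax.imshow(a, norm=LogNorm(vmin=a.min(), vmax=a.max()))
        f.colorbar(res)
        plt.show()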

  • Local Data Cache - How do I refresh the local db when I add fields to remote db?

    - by Chu
    I'm using a Local Data Cache in an ASP.NET 3.5 environment. I made a change in my main database by adding a new field. I double-click my .SYNC file in my project to start the Local Data Cache wizard again. The wizard starts, and I click OK in the hope that it'll re-query my database and add the new field to the local database file. Instead, I get an error saying "Synchronizing the database failed with the message: Unable to enumerate changes at the DbServerSyncProvider..." The only way I know to get things working again is to delete the .SYNC file along with the local database and start from scratch. There's got to be an easier way... anyone know it?

    Read the article

  • Pivot table not refreshing sort order

    - by William Anthony
    I have a PivotTable that gets its data source from another sheet in the same workbook. I want the sort order of the data to be the same as the data source order, so I chose "Sort in data source order" in the PivotTable options. The problem is, when I change the data order on the data source worksheet and then refresh the PivotTable, the sort order doesn't change. I googled that the PivotTable should be unlinked first and then re-linked in order to work properly, so I tried the following. The original data source has the named range origdata; the fake data source has the named range dummydata. I manually changed the data source to dummydata, then changed it back to origdata, and the sort order did change as expected. Now I want to automate the operation, so I'm using this code in the Worksheet.Activate event (note that PT is a PivotTable instance): ... PT.SourceData = "dummydata" PT.RefreshTable PT.SourceData = "origdata" PT.RefreshTable ... Changing the data source from VBA didn't change the sort order the way the manual method did. Why is that? Am I missing something? Maybe some routine is called when I change the data source manually via the toolbar button? Thanks in advance for your help.

    Read the article

  • Does Ruby on Rails "has_many" array provide data on a "need to know" basis?

    - by Jian Lin
    In Ruby on Rails, say the Actor model object is Tom Hanks and its "has_many" fans association holds 20,000 Fan objects; then actor.fans gives an array with 20,000 elements. Presumably the elements are not pre-populated with values? Otherwise, getting each Actor object from the DB could be extremely time consuming. So is it on a "need to know" basis? Does it pull data when I access actor.fans[500], and again when I access actor.fans[0]? If it jumps from record to record, it won't be able to optimize performance by doing a sequential read, which can be faster on the hard disk because those records could be in nearby sectors -- for example, if the program touches 2 random elements, it will be faster just to read those 2 records, but if it touches all elements in random order, it may be faster to read all records sequentially and then process the random elements. But how will RoR know whether I am touching only a few random elements or all of them?

    Read the article
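
    What ActiveRecord actually does is neither per-element fetching nor guessing: actor.fans is a lazy association proxy that runs a single query loading the whole collection the first time you touch it, then serves later accesses from memory. A hedged sketch of the behaviour (model names follow the question; the narrower finder at the end is just one way to avoid loading everything):

        actor = Actor.find_by_name("Tom Hanks")

        actor.fans          # no query yet - just an association proxy
        actor.fans[500]     # first real access: ONE query loads all 20,000 fans
        actor.fans[0]       # answered from the already-loaded array, no new query

        # To touch only a few rows, query through the association instead of indexing it:
        few = actor.fans.find(:all, :limit => 10, :offset => 500)   # Rails 2.x style finder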

  • How can I use gnuplot to plot data that is in an un-friendly format?

    - by Zack
    I need to graph some data that is not exactly in the most friendly format. Most examples put things in a nice column/table that is very easy to parse and graph; however, I have the following format (and am a bit stuck as to how to tackle this):
      DATE1 label_1 xx yy
      DATE1 label_2 xx yy
      DATE1 label_3 xx yy

      DATE2 label_2 xx yy
      DATE2 label_3 xx yy

      DATE3 label_1 xx yy
      DATE3 label_2 xx yy
      DATE3 label_3 xx yy

      DATE4 label_2 xx yy
      DATE4 label_3 xx yy
      ...continues
    *I've added the extra space between the dates for readability. **Note: under DATE2 and DATE4, label_1 is missing, i.e. the data file may have labels that come and go, and that should show up as a discontinuity in the graph. I'd like to have the X-axis use the DATEx values for the labels, and then create two lines for each label (xx and yy respectively). Does anyone have any suggestions on the best way to tackle this problem?

    Read the article
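
    One workable approach is to let gnuplot filter the file per label through a shell command and plot the xx and yy columns as separate lines. The sketch below assumes the DATE field is a real date in YYYY-MM-DD form and the file is called data.txt; both are assumptions.

        # Filter rows for each label with grep; plot columns 3 (xx) and 4 (yy) against the date.
        set xdata time
        set timefmt "%Y-%m-%d"
        set format x "%Y-%m-%d"

        plot "< grep 'label_1' data.txt" using 1:3 with linespoints title "label_1 xx", \
             "< grep 'label_1' data.txt" using 1:4 with linespoints title "label_1 yy", \
             "< grep 'label_2' data.txt" using 1:3 with linespoints title "label_2 xx", \
             "< grep 'label_2' data.txt" using 1:4 with linespoints title "label_2 yy"

    Alternatively, a small awk or Python pre-processing step can pivot the file into one column per label, which also makes the missing DATE2/DATE4 rows show up as genuine gaps.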
