Search Results

Search found 63386 results on 2536 pages for 'data structure'.


  • Should I encrypt data in database?

    - by Tio
    I have a client for whom I'm going to build a web application for patient care: managing patients, consults, history, calendars, basically everything around that. The problem is that this is sensitive data, patient history and such. The client insists on encrypting the data at the database level, but I think this is going to hurt the performance of the web app (though maybe I shouldn't be worried about that). I've read the data protection laws for health data in Portugal, but they aren't very specific about this (I've just asked about it and am waiting for a response). I've read the following link, but my question is different: should I encrypt the data in the database or not?

    One problem I foresee with encrypting the data is that I'm going to need a key. This could be the user's password, but we all know what user passwords look like (12345, etc.), and if I generate a key I would have to store it somewhere, which means the programmer, the DBA, or whoever could have access to it. Any thoughts on this? Even adding a random salt to the user password isn't going to solve the problem, since I can always access it and therefore decrypt the data.

    Read the article

  • What is the best way to store a table in C++

    - by Topo
    I'm programming a decision tree in C++ using a slightly modified version of the C4.5 algorithm. Each node represents an attribute, or a column of your data set, and it has one child per possible value of the attribute. My problem is how to store the training data set, keeping in mind that I have to use a subset for each node, so I need a quick way to select only a subset of rows and columns. The main goal is to do this in the most memory- and time-efficient way possible (in that order of priority).

    The best way I have thought of is to have an array of arrays (or std::vector), or something like that, and for each node keep a list (array, vector, etc.) of the column,line pairs (probably a tuple) that are valid for that node. I know there should be a better way to do this, any suggestions?

    UPDATE: What I need is something like this. In the beginning I have this data:

    Paris    4 5.0 True
    New York 7 1.3 True
    Tokio    2 9.1 False
    Paris    9 6.8 True
    Tokio    0 8.4 False

    But for the second node I just need this data:

    Paris    4 5.0
    New York 7 1.3
    Paris    9 6.8

    And for the third node:

    Tokio 2 9.1
    Tokio 0 8.4

    But with a table of millions of records with up to hundreds of columns. What I have in mind is to keep all the data in one matrix, and then for each node keep only the info of the current columns and rows. Something like this:

    Paris    4 5.0 True
    New York 7 1.3 True
    Tokio    2 9.1 False
    Paris    9 6.8 True
    Tokio    0 8.4 False

    Node 2: columns = [0,1,2], rows = [0,1,3]
    Node 3: columns = [0,1,2], rows = [2,4]

    This way, in the worst case I only waste size_of(int) * (number_of_columns + number_of_rows) per node, which is a lot less than having an independent data matrix for each node.
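
    For illustration, a minimal C++ sketch of the shared-matrix-plus-index idea described above (all names are hypothetical, not from the original post):

    #include <string>
    #include <vector>

    // One shared data matrix; each node only stores which rows/columns it uses.
    struct DataSet {
        std::vector<std::vector<std::string>> cells;  // cells[row][col]
    };

    struct NodeView {
        const DataSet* data;    // shared, never copied
        std::vector<int> rows;  // row indices valid for this node
        std::vector<int> cols;  // column indices valid for this node

        const std::string& at(int r, int c) const {
            return data->cells[rows[r]][cols[c]];
        }

        // Child view keeping only the rows whose column `col` equals `value`.
        NodeView split(int col, const std::string& value) const {
            NodeView child{data, {}, cols};
            for (int r : rows)
                if (data->cells[r][col] == value) child.rows.push_back(r);
            return child;
        }
    };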

    Read the article

  • MS Access 2003: Can data disappear from records and how do I test for this and prevent it?

    - by user328960
    Problem and about the database: Data has disappeared from a record in an Access 2003 database. This database has 1 backend and 3 frontends, multiple users, and is hosted on Citrix. Within this database, we have records of all clients served, numbering in the 1000s.

    Background info: The form for client data entry is set up with various subforms, including both a "programs enrolled" subform and a "services" subform. A client can be enrolled in multiple programs. Once enrolled in a program, services can be entered for that program area using the services subform. There are multiple fields in the services subform, one of which is a drop-down field allowing you to choose from the programs a client has been enrolled in (the list is updated for that client whenever he is enrolled in a new program).

    The problem details: For one specific record and one specific program area, the program has disappeared from the "programs enrolled" subform and all of the related services have disappeared from the "services" subform for a period of 3 months of data entry. However, other programs and services for this record did not disappear.

    Questions: Is the disappearance of data a common Access 2003 problem? Are there tests that can be run to detect that data is disappearing and to catch that data? If so, what are they? If there is specific code involved, what is it? What can be done to prevent data from disappearing (other than using a different database)?

    Read the article

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve any deeper into the abyss of Microsoft documentation, I'd like to know whether someone experienced with Change Data Capture and Change Tracking can tell me if one or both of these can replace the traditional audit trail setup: an audit trail table that is a copy of the 'real' table (all of the fields of the original table, plus date/time, user ID, and DML action fields), inserted into by triggers, where the trigger populates the audit trail table (which is all manual work).

    The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often. Can someone with experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am spending time looking at the right tool?

    The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when it happened, and who did it. These changes are commonly provided to an end user chronologically via an audit trail report. Which raises another question: if Change Data Capture or Change Tracking is the solution, I'd assume that this data can be queried just like data from a normal table?

    EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction log, so this sounds finite to me.
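
    For orientation, a rough T-SQL sketch of enabling Change Data Capture on one table and reading its changes (table name is a placeholder). Worth noting for the requirements above: CDC records what changed and when, but not which user made the change, and its cleanup job prunes change rows after a configurable retention period, which is why the finite-retention concern applies.

    -- Enable CDC for the database and for one table (placeholder: dbo.Patient)
    EXEC sys.sp_cdc_enable_db;
    EXEC sys.sp_cdc_enable_table
         @source_schema = N'dbo',
         @source_name   = N'Patient',
         @role_name     = NULL;

    -- Read every captured change between two LSNs
    DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_Patient');
    DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();
    SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Patient(@from_lsn, @to_lsn, N'all');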

    Read the article

  • What data structure should I use for hash lookup as well as binary search?

    - by zebraman
    I am working on a school homework assignment. I have a list of names. I want to be able to perform binary search on these names (find all names between a lower and upper bound) by first name as well as last name, and perform keyword searches as well (this will be accomplished using hashing). For example, if I have the names

    Garfield Cat
    Snoopy Dog
    Captain Crunch
    Fat Cat

    then a binary search of first names (C,H) will return Captain Crunch, Fat Cat, and Garfield Cat. A binary search of last names (Cr,D) will return Captain Crunch. A keyword search of 'cat' will return Fat Cat and Garfield Cat.

    I understand binary search will only work on a sorted list, but since I am planning on searching by two different criteria, I would have to re-sort the list by last name or first name depending on what I'm searching for. I feel it would be too inefficient to re-sort the list each time I want to perform a new binary search. Would it be better for me to set up and maintain two sorted lists (one sorted by first name, one sorted by last name)?

    Also, for hashing, will I have to set up a different table of names for that as well? I understand each keyword will hash to some value determined by a hash function, and this value (or key) is a table address where the corresponding names are stored. So I just want to know what would be the best way to solve this problem: maintaining separate structures, or is there a way to efficiently do everything I want with just one data structure?
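
    A minimal C++ sketch of the two-sorted-views-plus-keyword-index idea discussed above (all names are illustrative, and it assumes the `names` vector outlives the index):

    #include <algorithm>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct Name { std::string first, last; };

    // Two orderings over the same records, plus a keyword index.
    struct NameIndex {
        std::vector<const Name*> byFirst, byLast;  // sorted once, reused for every search
        std::unordered_map<std::string, std::vector<const Name*>> byKeyword;

        void build(const std::vector<Name>& names) {
            for (const Name& n : names) {
                byFirst.push_back(&n);
                byLast.push_back(&n);
                byKeyword[n.first].push_back(&n);  // a real version would lowercase keywords
                byKeyword[n.last].push_back(&n);
            }
            std::sort(byFirst.begin(), byFirst.end(),
                      [](const Name* a, const Name* b) { return a->first < b->first; });
            std::sort(byLast.begin(), byLast.end(),
                      [](const Name* a, const Name* b) { return a->last < b->last; });
        }

        // All names whose first name falls between lo and hi; locating the range is O(log n).
        std::vector<const Name*> firstNameRange(const std::string& lo, const std::string& hi) {
            auto b = std::lower_bound(byFirst.begin(), byFirst.end(), lo,
                                      [](const Name* a, const std::string& v) { return a->first < v; });
            auto e = std::upper_bound(byFirst.begin(), byFirst.end(), hi,
                                      [](const std::string& v, const Name* a) { return v < a->first; });
            return std::vector<const Name*>(b, e);
        }
    };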

    Read the article

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect in the range 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well. 1) Is there a danger the pattern of deletion from the left + random insertion will affect performance, eg by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use? 2) Would some other data structure give me better performance, either because of the deletion pattern or taking advantage of the fact I only ever need to find the smallest item? Update: glad I asked, the binary heap is clearly a better solution for this case, and as promised turned out to be very easy to implement. Hugo
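
    As the update notes, a binary heap fits this access pattern; a minimal sketch using the standard library (BigRational here is just a stand-in for whatever rational type is actually used):

    #include <functional>
    #include <queue>
    #include <vector>

    using BigRational = long double;  // placeholder for the real big-rational type

    int main() {
        // Min-heap: top() is always the smallest item, push/pop are O(log n).
        std::priority_queue<BigRational, std::vector<BigRational>, std::greater<BigRational>> work;
        work.push(1.0L);  // initial item

        while (!work.empty()) {
            BigRational smallest = work.top();
            work.pop();
            // ... process `smallest`, then push 0-2 strictly larger items ...
            if (smallest < 4.0L) {
                work.push(smallest * 2);
                work.push(smallest * 3);
            }
        }
    }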

    Read the article

  • What is a data structure for quickly finding non-empty intersections of a list of sets?

    - by Andrey Fedorov
    I have a set of N items, which are sets of integers; let's assume it's ordered and call it I[1..N]. Given a candidate set, I need to find the subset of I whose members have non-empty intersections with the candidate. So, for example, if:

    I = [{1,2}, {2,3}, {4,5}]

    I'm looking to define valid_items(items, candidate), such that:

    valid_items(I, {1}) == {1}
    valid_items(I, {2}) == {1, 2}
    valid_items(I, {3,4}) == {2, 3}

    I'm trying to optimize for one given set I and variable candidate sets. Currently I am doing this by caching items_containing[n] = {the sets which contain n}. In the above example, that would be:

    items_containing = [{}, {1}, {1,2}, {2}, {3}, {3}]

    That is, 0 is contained in no items, 1 is contained in item 1, 2 is contained in items 1 and 2, 3 is contained in item 2, and 4 and 5 are contained in item 3. That way, I can define valid_items(I, candidate) = union(items_containing[n] for n in candidate).

    Is there any more efficient data structure (of a reasonable size) for caching the result of this union? The obvious example of space 2^N is not acceptable, but N or N*log(N) would be.
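
    For reference, a runnable sketch of the inverted-index baseline the post already describes (names follow the post; items are indexed from 1 here):

    from collections import defaultdict

    def build_index(I):
        """items_containing[n] = set of item indices whose set contains n."""
        items_containing = defaultdict(set)
        for idx, item in enumerate(I, start=1):
            for n in item:
                items_containing[n].add(idx)
        return items_containing

    def valid_items(items_containing, candidate):
        result = set()
        for n in candidate:
            result |= items_containing.get(n, set())
        return result

    I = [{1, 2}, {2, 3}, {4, 5}]
    index = build_index(I)
    assert valid_items(index, {1}) == {1}
    assert valid_items(index, {2}) == {1, 2}
    assert valid_items(index, {3, 4}) == {2, 3}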

    Read the article

  • How to structure the tables of a very simple blog in MySQL?

    - by Programmer
    I want to add a very simple blog feature on one of my existing LAMP sites. It would be tied to a user's existing profile, and they would be able to simply input a title and a body for each post in their blog, and the date would be automatically set upon submission. They would be allowed to edit and delete any blog post and title at any time. The blog would be displayed from most recent to oldest, perhaps 20 posts to a page, with proper pagination above that. Other users would be able to leave comments on each post, which the blog owner would be allowed to delete, but not pre-moderate. That's basically it. Like I said, very simple. How should I structure the MySQL tables for this? I'm assuming that since there will be blog posts and comments, I would need a separate table for each, is that correct? But then what columns would I need in each table, what data structures should I use, and how should I link the two tables together (e.g. any foreign keys)? I could not find any tutorials for something like this, and what I'm looking to do is really offer my users the simplest version of a blog possible. No tags, no moderation, no images, no fancy formatting, etc. Just a simple diary-type, pure-text blog with commenting by other users.
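
    A minimal sketch of the two tables described above (table and column names are assumptions, and it presumes an existing users/profiles table to reference):

    CREATE TABLE blog_posts (
        id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
        user_id    INT UNSIGNED NOT NULL,            -- the blog owner, FK to the existing users table
        title      VARCHAR(255) NOT NULL,
        body       TEXT NOT NULL,
        created_at DATETIME NOT NULL,
        PRIMARY KEY (id),
        KEY idx_user_created (user_id, created_at)   -- supports the newest-first listing per user
    ) ENGINE=InnoDB;

    CREATE TABLE blog_comments (
        id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
        post_id    INT UNSIGNED NOT NULL,
        user_id    INT UNSIGNED NOT NULL,            -- the commenter
        body       TEXT NOT NULL,
        created_at DATETIME NOT NULL,
        PRIMARY KEY (id),
        KEY idx_post (post_id),
        CONSTRAINT fk_comment_post FOREIGN KEY (post_id) REFERENCES blog_posts (id) ON DELETE CASCADE
    ) ENGINE=InnoDB;

    Pagination is then just ORDER BY created_at DESC LIMIT 20 OFFSET n on blog_posts.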

    Read the article

  • Interesting Scala typing solution, doesn't work in 2.7.7?

    - by djc
    I'm trying to build some image algebra code that can work with images (basically a linear pixel buffer + dimensions) that have different types for the pixel. To get this to work, I've defined a parametrized Pixel trait with a few methods that should be able to get used with any Pixel subclass. (For now, I'm only interested in operations that work on the same Pixel type.) Here it is:

    trait Pixel[T <: Pixel[T]] {
      def mul(v: Double): T
      def max(v: T): T
      def div(v: Double): T
      def div(v: T): T
    }

    Now I define a single Pixel type that has storage based on three doubles (basically RGB 0.0-1.0), I've called it TripleDoublePixel:

    class TripleDoublePixel(v: Array[Double]) extends Pixel[TripleDoublePixel] {
      var data: Array[Double] = v

      def this() = this(Array(0.0, 0.0, 0.0))

      def toString(): String = {
        "(" + data(0) + ", " + data(1) + ", " + data(2) + ")"
      }

      def increment(v: TripleDoublePixel) {
        data(0) += v.data(0)
        data(1) += v.data(1)
        data(2) += v.data(2)
      }

      def mul(v: Double): TripleDoublePixel = {
        new TripleDoublePixel(data.map(x => x * v))
      }

      def div(v: Double): TripleDoublePixel = {
        new TripleDoublePixel(data.map(x => x / v))
      }

      def div(v: TripleDoublePixel): TripleDoublePixel = {
        var tmp = new Array[Double](3)
        tmp(0) = data(0) / v.data(0)
        tmp(1) = data(1) / v.data(1)
        tmp(2) = data(2) / v.data(2)
        new TripleDoublePixel(tmp)
      }

      def max(v: TripleDoublePixel): TripleDoublePixel = {
        val lv = data(0) * data(0) + data(1) * data(1) + data(2) * data(2)
        val vv = v.data(0) * v.data(0) + v.data(1) * v.data(1) + v.data(2) * v.data(2)
        if (lv > vv) (this) else v
      }
    }

    Now I want to write code to use this, that doesn't have to know what type the pixels are. For example:

    def idiv[T](a: Image[T], b: Image[T]) {
      for (i <- 0 until a.data.size) {
        a.data(i) = a.data(i).div(b.data(i))
      }
    }

    Unfortunately, this doesn't compile:

    (fragment of lindet-gen.scala):145: error: value div is not a member of T
        a.data(i) = a.data(i).div(b.data(i))

    I was told in #scala that this worked for someone else, but that was on 2.8. I've tried to get 2.8-rc1 going, but it doesn't compile for me. Is there any way to get this to work in 2.7.7?
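
    One likely cause, judging from the error, is that idiv's type parameter is unbounded, so the compiler knows nothing about T; a sketch with the bound from the trait carried over (whether 2.7.7 then accepts the overloaded div resolution is not something I can confirm):

    def idiv[T <: Pixel[T]](a: Image[T], b: Image[T]) {
      for (i <- 0 until a.data.size) {
        a.data(i) = a.data(i).div(b.data(i))
      }
    }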

    Read the article

  • PHP : If...Else...Query

    - by Rachel
    I am executing this statement inside while (($data = fgetcsv($this->fin, 5000, ";")) !== FALSE). Now what I want in the else branch is to throw an exception only for the data value which did not satisfy the if condition. Right now I am displaying the complete row, as I am not sure how to throw an exception only for the value that fails the check.

    Code:

    if ((strtotime($data[11]) && strtotime($data[12]) && strtotime($data[16])) !== FALSE
        && ctype_digit($data[0])
        && ctype_alnum($data[1])
        && ctype_digit($data[2])
        && ctype_alnum($data[3])
        && ctype_alnum($data[4])
        && ctype_alnum($data[5])
        && ctype_alnum($data[6])
        && ctype_alnum($data[7])
        && ctype_alnum($data[8])
        && $this->_is_valid($data[9])
        && ctype_digit($data[10])
        && ctype_digit($data[13])
        && $this->_is_valid($data[14])) {
        //Some Logic
    } else {
        throw new Exception("Data {$data[0], $data[1], $data[2], $data[3], $data[4], $data[5], $data[6], $data[7], $data[8], $data[9], $data[10], $data[11], $data[12], $data[13], $data[14], $data[16]} is not in valid format");
    }

    Guidance would be highly appreciated as to how I can throw an exception only for the value which did not satisfy the if condition.
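
    One way to report only the offending field is to validate column by column instead of in one big boolean expression. A rough sketch (the rule map is illustrative, not the full rule set from the post; it assumes $data is the row from fgetcsv() and PHP 5.3+ for the closure):

    $rules = array(
        0  => 'ctype_digit',
        1  => 'ctype_alnum',
        2  => 'ctype_digit',
        10 => 'ctype_digit',
        11 => function ($v) { return strtotime($v) !== false; },
    );

    foreach ($rules as $index => $rule) {
        if (!call_user_func($rule, $data[$index])) {
            throw new Exception("Column $index with value '{$data[$index]}' is not in a valid format");
        }
    }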

    Read the article

  • SQL SERVER – Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video

    - by pinaldave
    Copying data from one table to another table is one of the most requested questions on forums, Facebook and Twitter. The question has come in many formats, and there are places where I have seen developers using a cursor instead of this direct method. I wrote a similar article a few years ago - SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE. The article has been very popular and I have received many interesting and constructive comments. However, two specific comments keep ending up in my mailbox:

    1) The SQL Server AdventureWorks sample database does not have the table I used in the example.
    2) Is there a video tutorial of the same example?

    After careful thought I decided to build a new set of scripts for the example, very similar to the old ones, along with a video tutorial. There was no better place than our SQL in Sixty Seconds series to cover this interesting small concept. Let me know what you think of this video. Here is the updated script.

    -- Method 1 : INSERT INTO SELECT
    USE AdventureWorks2012
    GO
    ----Create TestTable
    CREATE TABLE TestTable (FirstName VARCHAR(100), LastName VARCHAR(100))
    ----INSERT INTO TestTable using SELECT
    INSERT INTO TestTable (FirstName, LastName)
    SELECT FirstName, LastName
    FROM Person.Person
    WHERE EmailPromotion = 2
    ----Verify that Data in TestTable
    SELECT FirstName, LastName
    FROM TestTable
    ----Clean Up Database
    DROP TABLE TestTable
    GO

    -- Method 2 : SELECT INTO
    USE AdventureWorks2012
    GO
    ----Create new table and insert into table using SELECT INSERT
    SELECT FirstName, LastName
    INTO TestTable
    FROM Person.Person
    WHERE EmailPromotion = 2
    ----Verify that Data in TestTable
    SELECT FirstName, LastName
    FROM TestTable
    ----Clean Up Database
    DROP TABLE TestTable
    GO

    Related Tips in SQL in Sixty Seconds:
    - SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE
    - Powershell – Importing CSV File Into Database – Video
    - SQL SERVER – 2005 – Export Data From SQL Server 2005 to Microsoft Excel Datasheet
    - SQL SERVER – Import CSV File into Database Table Using SSIS
    - SQL SERVER – Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server
    - SQL SERVER – 2005 – Generate Script with Data from Database – Database Publishing Wizard

    What would you like to see in the next SQL in Sixty Seconds video?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video. Tagged: Excel

    Read the article

  • Data Generator Source Adapter

    This component needs little explanation. It generates random integer (DT_I4) and string (DT_WSTR) data and places them in the pipeline. You specify how many columns of each you would like, and for any string columns you pass a fixed length value. You then specify how many rows in total you require to be generated. We use this component to test the pipeline and downstream components. Previously we would have used a script component (as a source) to generate the rows, but we found ourselves rewriting the code too often, so we created this component.

    Screenshots: SQL Server 2005 Integration Services; SQL Server 2008/2012 Integration Services.

    The component is provided as an MSI file; however, to complete the installation you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Data Generator Source in the list.

    Downloads
    The Data Generator Source Adapter is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version matching your SQL Server version, or install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    - Data Generator Source Adapter for SQL Server 2005
    - Data Generator Source Adapter for SQL Server 2008
    - Data Generator Source Adapter for SQL Server 2012

    Version History
    SQL Server 2012
    - Version 3.0.0.30 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008
    - Version 2.0.0.29 - SQL Server 2008 February 2008 CTP. Includes support for upgrade of 2005 packages. Simplified user interface. (4 Mar 2008)
    - Version 2.0.0.27 - SQL Server 2008 November 2007 CTP. String columns will now use the default system code page. Previously string columns always used 1252. (15 Feb 2008)
    SQL Server 2005
    - Version 1.1.0.23 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006)
    - Version 1.0.0.0 - SQL Server 2005 IDW 16 Sept CTP. Public release. (6 Oct 2005)

    Read the article

  • Data Governance (Veri Yönetisimi)

    - by Arda Eralp
    Data governance is a system of responsibilities for work involving data. The foundation of this system is policies, standards and procedures. Through these policies, standards and procedures, the system decides when, under which conditions, in which activities, with which methods and by whom the data will be used.

    For the system to operate successfully, awareness must first be established within the organization. Once awareness is established, the organization must adopt a governance and architecture culture; only under these conditions can the system work. For these reasons, data governance is not a short process; on the contrary, it is a process that runs for as long as the organization exists. This tells us that data governance is not a project but a program.

    At the start of the program, clarifying the organization's needs and establishing awareness are fundamental. The target audience is everyone who works with data, directly or indirectly. Therefore, at the start of the program, meetings are held with the teams that make up the target audience. Through these meetings awareness is raised, and the teams' needs become clear as the teams describe them first-hand.

    Once the target audience's needs have been clarified, this continuously running process is planned. Planning requires prioritizing the needs, because each team's needs may differ and not all needs can be met at the same time; this expectation rests on how the teams' needs are connected to one another. After prioritizing the needs, the size of the organization must be taken into account: if the organization is too large to be governed as a whole all at once, governance starts with the teams whose needs have the highest priority, so that the process can be rolled out across the whole organization at a steady pace.

    Once the needs have been identified and the relevant teams selected, planning of the program can begin. The first priority in the planning phase is to plan the Data Governance Office, which will oversee the stages of the process and maintain control once the process is running within the organization. Along with planning the Office, the roles in the process and their responsibilities are defined. The planning phase covers the Data Governance Office, roles and responsibilities, security, and the systems where data is stored. When the planning phase is complete, the program can move into its operating phase, driven by the selected teams and their needs.

    In the operating phase, work is carried out on security and on the data storage systems according to each team's needs. This work is documented as a process, and when it ends the same process is run again with another team for another need; in this way the process becomes standardized across the organization. On the security side, data access security and data usage security are addressed. The work on the data storage systems ensures that those systems are built and managed according to the standards defined within the program.

    After the operating phase, the program moves into its monitoring phase. By this point the Data Governance Office has been established and the policies, standards and procedures have been defined. The Data Governance Office staff monitor the operation of the program within their roles and responsibilities, and change the policies, standards and procedures when they see the need.

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while; good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton).

    Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible.

    Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that it was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance.

    SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and using bw-trees (which avoid locking through the use of pointers), and each update is handled as a delete-and-insert.

    This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken.

    @rob_farley
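
    For readers who haven't seen one, a rough sketch of what such a declaration looks like (table and column names are invented, and the database first needs a memory-optimized filegroup):

    CREATE TABLE dbo.OrdersInMemory
    (
        OrderId  INT          NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Customer NVARCHAR(50) NOT NULL,
        Placed   DATETIME2    NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);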

    Read the article

  • Strategy for avoiding duplicate object ids for data shared across devices using iCloud

    - by rmaddy
    I have a data intensive iOS app that is not using CoreData nor does it support iCloud synching (yet). All of my objects are created with unique keys. I use a simple long long initialized with the current time. Then as I need a new key I increment the value by 1. This has all worked well for a few years with the app running isolated on a single device. Now I want to add support for automatic data sync across devices using iCloud. As my app is written, there is the possibility that two objects created on two different devices could end up with the same key. I need to avoid this possibility. I'm looking for ideas for solving this issue. I have a few requirements that the solution must meet: 1) The key needs to remain a single integral data type. Converting all existing keys to a compound key or to a string or other type would affect the entire code base and likely result in more bugs than it's worth. 2) The solution can't depend on an Internet connection. A user must be able to run the app and add data even with no Internet connection. The data should still resolve properly later when the data syncs through iCloud once a connection is available. I'll accept one exception to this rule. If no other option is available, I may be open to requiring an Internet connection the first time the app's data is initialized. One idea I have been toying around with in my head is logically splitting the integer key into two parts. The high 4 or 5 bits could be used as some sort of device id while the rest represents the actual key. The fuzzy part is figuring out how to come up with non-conflicting device ids that fit in a few bits. This should be viable since I don't need to deal will millions of devices. I just need to deal with the few devices that would be shared by a given iCloud account. I'm open to suggestions. Thanks.
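
    A rough sketch of the high-bits idea from the last paragraph, in plain C (the 5-bit split and the function name are hypothetical, and the top bit is left clear so keys stay non-negative):

    #include <stdint.h>

    /* 5 bits of device id (0-31), 58 bits of counter, sign bit unused. */
    #define DEVICE_BITS  5
    #define COUNTER_BITS 58

    static int64_t make_key(unsigned device_id, int64_t counter)
    {
        uint64_t dev = (uint64_t)(device_id & ((1u << DEVICE_BITS) - 1));
        uint64_t cnt = (uint64_t)counter & ((1ULL << COUNTER_BITS) - 1);
        return (int64_t)((dev << COUNTER_BITS) | cnt);
    }

    The remaining piece, handing out the 0-31 device ids without collisions, could perhaps be done once per device through a small record in the iCloud key-value store, which fits the one-time-connection exception mentioned above.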

    Read the article

  • How can you tell whether to use Composite Pattern or a Tree Structure, or a third implementation?

    - by Aske B.
    I have two client types, an "Observer"-type and a "Subject"-type. They're both associated with a hierarchy of groups. The Observer will receive (calendar) data from the groups it is associated with throughout the different hierarchies. This data is calculated by combining data from 'parent' groups of the group trying to collect data (each group can have only one parent). The Subject will be able to create the data (that the Observers will receive) in the groups they're associated with. When data is created in a group, all 'children' of the group will have the data as well, and they will be able to make their own version of a specific area of the data, but still linked to the original data created (in my specific implementation, the original data will contain time-period(s) and headline, while the subgroups specify the rest of the data for the receivers directly linked to their respective groups). However, when the Subject creates data, it has to check if all affected Observers have any data that conflicts with this, which means a huge recursive function, as far as I can understand. So I think this can be summed up to the fact that I need to be able to have a hierarchy that you can go up and down in, and some places be able to treat them as a whole (recursion, basically). Also, I'm not just aiming at a solution that works. I'm hoping to find a solution that is relatively easy to understand (architecture-wise at least) and also flexible enough to be able to easily receive additional functionality in the future. Is there a design pattern, or a good practice to go by, to solve this problem or similar hierarchy problems? EDIT: Here's the design I have: The "Phoenix"-class is named that way because I didn't think of an appropriate name yet. But besides this I need to be able to hide specific activities for specific observers, even though they are attached to them through the groups. A little Off-topic: Personally, I feel that I should be able to chop this problem down to smaller problems, but it escapes me how. I think it's because it involves multiple recursive functionalities that aren't associated with each other and different client types that needs to get information in different ways. I can't really wrap my head around it. If anyone can guide me in a direction of how to become better at encapsulating hierarchy problems, I'd be very glad to receive that as well.
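
    For the "walk up for inherited data, walk down to treat a subtree as a whole" part, a Composite-style node with a parent pointer is often enough; a bare Java sketch (class and method names are invented, not taken from the design above):

    import java.util.ArrayList;
    import java.util.List;

    // Composite-style group node: walk up for inherited data, down for affected subgroups.
    class Group {
        private final Group parent;                                    // null for a root group
        private final List<Group> children = new ArrayList<Group>();
        private final List<String> ownData = new ArrayList<String>();  // stand-in for calendar entries

        Group(Group parent) {
            this.parent = parent;
            if (parent != null) parent.children.add(this);
        }

        void addData(String entry) { ownData.add(entry); }

        // Data visible to this group: its own entries plus everything inherited from ancestors.
        List<String> effectiveData() {
            List<String> result = (parent == null)
                    ? new ArrayList<String>()
                    : parent.effectiveData();
            result.addAll(ownData);
            return result;
        }

        // Collect this group and every descendant, e.g. to check for conflicts before creating data.
        void collectSubtree(List<Group> out) {
            out.add(this);
            for (Group child : children) {
                child.collectSubtree(out);
            }
        }
    }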

    Read the article

  • JGoodies HashMap

    - by JohnMcClane
    Hi, I'm trying to build a chart program using presentation model. Using JGoodies for data binding was relatively easy for simple types like strings or numbers. But I can't figure out how to use it on a hashmap. I'll try to explain how the chart works and what my problem is: A chart consists of DataSeries, a DataSeries consists of DataPoints. I want to have a data model and to be able to use different views on the same model (e.g. bar chart, pie chart,...). Each of them consists of three classes. For example: DataPointModel: holds the data model (value, label, category) DataPointViewModel: extends JGoodies PresentationModel. wraps around DataPointModel and holds view properties like font and color. DataPoint: abstract class, extends JComponent. Different Views must subclass and implement their own ui. Binding and creating the data model was easy, but i don't know how to bind my data series model. package at.onscreen.chart; import java.beans.PropertyChangeListener; import java.beans.PropertyChangeSupport; import java.beans.PropertyVetoException; import java.util.Collection; import java.util.HashMap; import java.util.Iterator; public class DataSeriesModel { public static String PROPERTY_DATAPOINT = "dataPoint"; public static String PROPERTY_DATAPOINTS = "dataPoints"; public static String PROPERTY_LABEL = "label"; public static String PROPERTY_MAXVALUE = "maxValue"; /** * holds the data points */ private HashMap dataPoints; /** * the label for the data series */ private String label; /** * the maximum data point value */ private Double maxValue; /** * the model supports property change notification */ private PropertyChangeSupport propertyChangeSupport; /** * default constructor */ public DataSeriesModel() { this.maxValue = Double.valueOf(0); this.dataPoints = new HashMap(); this.propertyChangeSupport = new PropertyChangeSupport(this); } /** * constructor * @param label - the series label */ public DataSeriesModel(String label) { this.dataPoints = new HashMap(); this.maxValue = Double.valueOf(0); this.label = label; this.propertyChangeSupport = new PropertyChangeSupport(this); } /** * full constructor * @param label - the series label * @param dataPoints - an array of data points */ public DataSeriesModel(String label, DataPoint[] dataPoints) { this.dataPoints = new HashMap(); this.propertyChangeSupport = new PropertyChangeSupport(this); this.maxValue = Double.valueOf(0); this.label = label; for (int i = 0; i < dataPoints.length; i++) { this.addDataPoint(dataPoints[i]); } } /** * full constructor * @param label - the series label * @param dataPoints - a collection of data points */ public DataSeriesModel(String label, Collection dataPoints) { this.dataPoints = new HashMap(); this.propertyChangeSupport = new PropertyChangeSupport(this); this.maxValue = Double.valueOf(0); this.label = label; for (Iterator it = dataPoints.iterator(); it.hasNext();) { this.addDataPoint(it.next()); } } /** * adds a new data point to the series. if the series contains a data point with same id, it will be replaced by the new one. 
* @param dataPoint - the data point */ public void addDataPoint(DataPoint dataPoint) { String category = dataPoint.getCategory(); DataPoint oldDataPoint = this.getDataPoint(category); this.dataPoints.put(category, dataPoint); this.setMaxValue(Math.max(this.maxValue, dataPoint.getValue())); this.propertyChangeSupport.firePropertyChange(PROPERTY_DATAPOINT, oldDataPoint, dataPoint); } /** * returns the data point with given id or null if not found * @param uid - the id of the data point * @return the data point or null if there is no such point in the table */ public DataPoint getDataPoint(String category) { return this.dataPoints.get(category); } /** * removes the data point with given id from the series, if present * @param category - the data point to remove */ public void removeDataPoint(String category) { DataPoint dataPoint = this.getDataPoint(category); this.dataPoints.remove(category); if (dataPoint != null) { if (dataPoint.getValue() == this.getMaxValue()) { Double maxValue = Double.valueOf(0); for (Iterator it = this.iterator(); it.hasNext();) { DataPoint itDataPoint = it.next(); maxValue = Math.max(itDataPoint.getValue(), maxValue); } this.setMaxValue(maxValue); } } this.propertyChangeSupport.firePropertyChange(PROPERTY_DATAPOINT, dataPoint, null); } /** * removes all data points from the series * @throws PropertyVetoException */ public void removeAll() { this.setMaxValue(Double.valueOf(0)); this.dataPoints.clear(); this.propertyChangeSupport.firePropertyChange(PROPERTY_DATAPOINTS, this.getDataPoints(), null); } /** * returns the maximum of all data point values * @return the maximum of all data points */ public Double getMaxValue() { return this.maxValue; } /** * sets the max value * @param maxValue - the max value */ protected void setMaxValue(Double maxValue) { Double oldMaxValue = this.getMaxValue(); this.maxValue = maxValue; this.propertyChangeSupport.firePropertyChange(PROPERTY_MAXVALUE, oldMaxValue, maxValue); } /** * returns true if there is a data point with given category * @param category - the data point category * @return true if there is a data point with given category, otherwise false */ public boolean contains(String category) { return this.dataPoints.containsKey(category); } /** * returns the label for the series * @return the label for the series */ public String getLabel() { return this.label; } /** * returns an iterator over the data points * @return an iterator over the data points */ public Iterator iterator() { return this.dataPoints.values().iterator(); } /** * returns a collection of the data points. the collection supports removal, but does not support adding of data points. 
* @return a collection of data points */ public Collection getDataPoints() { return this.dataPoints.values(); } /** * returns the number of data points in the series * @return the number of data points */ public int getSize() { return this.dataPoints.size(); } /** * adds a PropertyChangeListener * @param listener - the listener */ public void addPropertyChangeListener(PropertyChangeListener listener) { this.propertyChangeSupport.addPropertyChangeListener(listener); } /** * removes a PropertyChangeListener * @param listener - the listener */ public void removePropertyChangeListener(PropertyChangeListener listener) { this.propertyChangeSupport.removePropertyChangeListener(listener); } } package at.onscreen.chart; import java.beans.PropertyVetoException; import java.util.Collection; import java.util.Iterator; import com.jgoodies.binding.PresentationModel; public class DataSeriesViewModel extends PresentationModel { /** * default constructor */ public DataSeriesViewModel() { super(new DataSeriesModel()); } /** * constructor * @param label - the series label */ public DataSeriesViewModel(String label) { super(new DataSeriesModel(label)); } /** * full constructor * @param label - the series label * @param dataPoints - an array of data points */ public DataSeriesViewModel(String label, DataPoint[] dataPoints) { super(new DataSeriesModel(label, dataPoints)); } /** * full constructor * @param label - the series label * @param dataPoints - a collection of data points */ public DataSeriesViewModel(String label, Collection dataPoints) { super(new DataSeriesModel(label, dataPoints)); } /** * full constructor * @param model - the data series model */ public DataSeriesViewModel(DataSeriesModel model) { super(model); } /** * adds a data point to the series * @param dataPoint - the data point */ public void addDataPoint(DataPoint dataPoint) { this.getBean().addDataPoint(dataPoint); } /** * returns true if there is a data point with given category * @param category - the data point category * @return true if there is a data point with given category, otherwise false */ public boolean contains(String category) { return this.getBean().contains(category); } /** * returns the data point with given id or null if not found * @param uid - the id of the data point * @return the data point or null if there is no such point in the table */ public DataPoint getDataPoint(String category) { return this.getBean().getDataPoint(category); } /** * returns a collection of the data points. the collection supports removal, but does not support adding of data points. 
* @return a collection of data points */ public Collection getDataPoints() { return this.getBean().getDataPoints(); } /** * returns the label for the series * @return the label for the series */ public String getLabel() { return this.getBean().getLabel(); } /** * sets the max value * @param maxValue - the max value */ public Double getMaxValue() { return this.getBean().getMaxValue(); } /** * returns the number of data points in the series * @return the number of data points */ public int getSize() { return this.getBean().getSize(); } /** * returns an iterator over the data points * @return an iterator over the data points */ public Iterator iterator() { return this.getBean().iterator(); } /** * removes all data points from the series * @throws PropertyVetoException */ public void removeAll() { this.getBean().removeAll(); } /** * removes the data point with given id from the series, if present * @param category - the data point to remove */ public void removeDataPoint(String category) { this.getBean().removeDataPoint(category); } } package at.onscreen.chart; import java.beans.PropertyChangeEvent; import java.beans.PropertyChangeListener; import java.beans.PropertyVetoException; import java.util.Collection; import java.util.Iterator; import javax.swing.JComponent; public abstract class DataSeries extends JComponent implements PropertyChangeListener { /** * the model */ private DataSeriesViewModel model; /** * default constructor */ public DataSeries() { this.model = new DataSeriesViewModel(); this.model.addPropertyChangeListener(this); this.createComponents(); } /** * constructor * @param label - the series label */ public DataSeries(String label) { this.model = new DataSeriesViewModel(label); this.model.addPropertyChangeListener(this); this.createComponents(); } /** * full constructor * @param label - the series label * @param dataPoints - an array of data points */ public DataSeries(String label, DataPoint[] dataPoints) { this.model = new DataSeriesViewModel(label, dataPoints); this.model.addPropertyChangeListener(this); this.createComponents(); } /** * full constructor * @param label - the series label * @param dataPoints - a collection of data points */ public DataSeries(String label, Collection dataPoints) { this.model = new DataSeriesViewModel(label, dataPoints); this.model.addPropertyChangeListener(this); this.createComponents(); } /** * full constructor * @param model - the model */ public DataSeries(DataSeriesViewModel model) { this.model = model; this.model.addPropertyChangeListener(this); this.createComponents(); } /** * creates, binds and configures UI components. * data point properties can be created here as components or be painted in paintComponent. 
*/ protected abstract void createComponents(); @Override public void propertyChange(PropertyChangeEvent evt) { this.repaint(); } /** * adds a data point to the series * @param dataPoint - the data point */ public void addDataPoint(DataPoint dataPoint) { this.model.addDataPoint(dataPoint); } /** * returns true if there is a data point with given category * @param category - the data point category * @return true if there is a data point with given category, otherwise false */ public boolean contains(String category) { return this.model.contains(category); } /** * returns the data point with given id or null if not found * @param uid - the id of the data point * @return the data point or null if there is no such point in the table */ public DataPoint getDataPoint(String category) { return this.model.getDataPoint(category); } /** * returns a collection of the data points. the collection supports removal, but does not support adding of data points. * @return a collection of data points */ public Collection getDataPoints() { return this.model.getDataPoints(); } /** * returns the label for the series * @return the label for the series */ public String getLabel() { return this.model.getLabel(); } /** * sets the max value * @param maxValue - the max value */ public Double getMaxValue() { return this.model.getMaxValue(); } /** * returns the number of data points in the series * @return the number of data points */ public int getDataPointCount() { return this.model.getSize(); } /** * returns an iterator over the data points * @return an iterator over the data points */ public Iterator iterator() { return this.model.iterator(); } /** * removes all data points from the series * @throws PropertyVetoException */ public void removeAll() { this.model.removeAll(); } /** * removes the data point with given id from the series, if present * @param category - the data point to remove */ public void removeDataPoint(String category) { this.model.removeDataPoint(category); } /** * returns the data series view model * @return - the data series view model */ public DataSeriesViewModel getViewModel() { return this.model; } /** * returns the data series model * @return - the data series model */ public DataSeriesModel getModel() { return this.model.getBean(); } } package at.onscreen.chart.builder; import java.util.Collection; import net.miginfocom.swing.MigLayout; import at.onscreen.chart.DataPoint; import at.onscreen.chart.DataSeries; import at.onscreen.chart.DataSeriesViewModel; public class BuilderDataSeries extends DataSeries { /** * default constructor */ public BuilderDataSeries() { super(); } /** * constructor * @param label - the series label */ public BuilderDataSeries(String label) { super(label); } /** * full constructor * @param label - the series label * @param dataPoints - an array of data points */ public BuilderDataSeries(String label, DataPoint[] dataPoints) { super(label, dataPoints); } /** * full constructor * @param label - the series label * @param dataPoints - a collection of data points */ public BuilderDataSeries(String label, Collection dataPoints) { super(label, dataPoints); } /** * full constructor * @param model - the model */ public BuilderDataSeries(DataSeriesViewModel model) { super(model); } @Override protected void createComponents() { this.setLayout(new MigLayout()); /* * * I want to add a new BuilderDataPoint for each data point in the model. * I want the BuilderDataPoints to be synchronized with the model. * e.g. 
when a data point is removed from the model, the BuilderDataPoint shall be removed * from the BuilderDataSeries * */ } } package at.onscreen.chart.builder; import javax.swing.JFormattedTextField; import javax.swing.JTextField; import at.onscreen.chart.DataPoint; import at.onscreen.chart.DataPointModel; import at.onscreen.chart.DataPointViewModel; import at.onscreen.chart.ValueFormat; import com.jgoodies.binding.adapter.BasicComponentFactory; import com.jgoodies.binding.beans.BeanAdapter; public class BuilderDataPoint extends DataPoint { /** * default constructor */ public BuilderDataPoint() { super(); } /** * constructor * @param category - the category */ public BuilderDataPoint(String category) { super(category); } /** * constructor * @param value - the value * @param label - the label * @param category - the category */ public BuilderDataPoint(Double value, String label, String category) { super(value, label, category); } /** * full constructor * @param model - the model */ public BuilderDataPoint(DataPointViewModel model) { super(model); } @Override protected void createComponents() { BeanAdapter beanAdapter = new BeanAdapter(this.getModel(), true); ValueFormat format = new ValueFormat(); JFormattedTextField value = BasicComponentFactory.createFormattedTextField(beanAdapter.getValueModel(DataPointModel.PROPERTY_VALUE), format); this.add(value, "w 80, growx, wrap"); JTextField label = BasicComponentFactory.createTextField(beanAdapter.getValueModel(DataPointModel.PROPERTY_LABEL)); this.add(label, "growx, wrap"); JTextField category = BasicComponentFactory.createTextField(beanAdapter.getValueModel(DataPointModel.PROPERTY_CATEGORY)); this.add(category, "growx, wrap"); } } To sum it up: I need to know how to bind a hash map property to JComponent.components property. JGoodies is in my opinion not very well documented, I spent a long time searching through the internet, but I did not find any solution to my problem. Hope you can help me.
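
    One low-tech way to keep the child components in sync with the map, given that DataSeriesModel already stores the DataPoint instances (which are JComponents) and fires PROPERTY_DATAPOINT / PROPERTY_DATAPOINTS, is to rebuild the children inside the listener BuilderDataSeries inherits. An untested sketch written against the classes above:

    // Inside BuilderDataSeries: rebuild the child components whenever the series model fires.
    @Override
    public void propertyChange(java.beans.PropertyChangeEvent evt) {
        String name = evt.getPropertyName();
        if (DataSeriesModel.PROPERTY_DATAPOINT.equals(name)
                || DataSeriesModel.PROPERTY_DATAPOINTS.equals(name)) {
            // Use Container.remove per child; the inherited removeAll() would clear the model instead.
            for (java.awt.Component child : getComponents()) {
                remove(child);
            }
            for (java.util.Iterator it = iterator(); it.hasNext();) {
                add((DataPoint) it.next(), "growx, wrap");
            }
            revalidate();
        }
        repaint();
    }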

    Read the article

  • Getting information of a class structure at runtime using Java reflection

    - by Eliza Sahoo
    A simple way to get the class structure information of a class we are using in our application:

    Step 1: Create the Class object of the class we want to explore at runtime, or of the class that is being loaded at runtime.
    Step 2: As we are looking for the structure of the class, get all constructors, fields and methods by invoking the respective functions of the Class.
    Step 3: Print them in the Java console, or use them as required.

    Eliza
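
    A minimal sketch of those three steps (the class name is just an example):

    import java.lang.reflect.Constructor;
    import java.lang.reflect.Field;
    import java.lang.reflect.Method;

    public class ClassExplorer {
        public static void main(String[] args) throws ClassNotFoundException {
            // Step 1: obtain the Class object, e.g. for a class only known by name at runtime.
            Class<?> clazz = Class.forName("java.util.ArrayList");

            // Step 2: pull out constructors, fields and methods.
            Constructor<?>[] constructors = clazz.getDeclaredConstructors();
            Field[] fields = clazz.getDeclaredFields();
            Method[] methods = clazz.getDeclaredMethods();

            // Step 3: print them (or use them as required).
            for (Constructor<?> c : constructors) System.out.println("constructor: " + c);
            for (Field f : fields) System.out.println("field: " + f);
            for (Method m : methods) System.out.println("method: " + m);
        }
    }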

    Read the article

  • Dynamically create hierarchical tableview from file structure

    - by Rob M
    Hi, I have a directory structure of directories and files that I want to render as a tableview in an iPhone app. I'd rather have this generated dynamically than have to maintain an XML PLIST or whatever. I.e., replicate the Explorer functionality in Windows, where a user can traverse the structure and then select bundled image files for viewing. Is there any way to do this? Any advice would be appreciated.

    Read the article

  • Ruby data structure to render a certain JSON format

    - by dt
    [{"id":"123","name":"House"},
     {"id":"1456","name":"Desperate Housewives"},
     {"id":"789","name":"Dollhouse"},
     {"id":"10","name":"Full House"}]

    How can I render to produce this JSON format from within Ruby? I have all the data from the DB (@result) and don't know what data structure to use in Ruby so that it renders to this when I do:

    respond_to do |format|
      format.json { render :json => @result }
    end

    What data structure should @result be, and how can I iterate to produce it? Thank you!
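
    Assuming @result starts as a collection of database rows, an array of hashes renders to exactly that shape; a sketch (the Show model and its columns are guesses):

    # In the controller: build an array of hashes, one per row.
    @result = Show.all.map { |show| { :id => show.id.to_s, :name => show.name } }

    respond_to do |format|
      # to_json turns the array of hashes into [{"id":"123","name":"House"}, ...]
      format.json { render :json => @result }
    end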

    Read the article

  • How to get a hierarchical PHP structure from a DB table, in a PHP array or JSON

    - by daniel
    Hi guys, can you please help me? How can I get a hierarchical PHP structure from a DB table, as a PHP array or as JSON, in the following format:

    [{
      "attributes": {"id": "111"},
      "data": "Some node title",
      "children": [{ "attributes": { "id": "555" }, "data": "A sub node title here" }],
      "state": "open"
    },
    {
      "attributes": {"id": "222"},
      "data": "Other main node",
      "children": [{ "attributes": { "id": "666" }, "data": "Another sub node" }],
      "state": "open"
    }]

    My SQL table contains the fields: ID, PARENT, ORDER, TITLE. Can you please help me with this? I'm going crazy trying to get it. Many thanks in advance. Daniel
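
    A rough sketch of turning the flat ID/PARENT rows into that nested shape before encoding it (the fetch query is illustrative; plug in your own DB access):

    // $rows: result of  SELECT ID, PARENT, TITLE FROM nodes ORDER BY `ORDER`
    // fetched as associative arrays; PARENT is 0 (or NULL) for top-level nodes.
    function build_tree(array $rows, $parent = 0) {
        $branch = array();
        foreach ($rows as $row) {
            if ((int)$row['PARENT'] !== (int)$parent) {
                continue;
            }
            $node = array(
                'attributes' => array('id' => (string)$row['ID']),
                'data'       => $row['TITLE'],
            );
            $children = build_tree($rows, $row['ID']);
            if ($children) {
                $node['children'] = $children;
                $node['state']    = 'open';
            }
            $branch[] = $node;
        }
        return $branch;
    }

    echo json_encode(build_tree($rows));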

    Read the article

  • Convert JSON structure

    - by Scobal
    I have a set of data in one JSON structure: [[task1, 10, 99], [task2, 10, 99], [task3, 10, 99], [task1, 11, 99], [task2, 11, 99], [task3, 11, 99]] and need to convert it to another JSON structure: [{ label: "task1", data: [[10, 99], [11, 99]] }, { label: "task2", data: [[10, 99], [11, 99]] }, { label: "task3", data: [[10, 99], [11, 99]] }] What's the best way to do this with javascript/java?
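
    A plain-JavaScript sketch of the regrouping (no library assumed; the function name is made up):

    function regroup(rows) {
        var byLabel = {};
        var result = [];
        for (var i = 0; i < rows.length; i++) {
            var label = rows[i][0];
            var point = [rows[i][1], rows[i][2]];
            if (!byLabel[label]) {
                byLabel[label] = { label: label, data: [] };
                result.push(byLabel[label]);
            }
            byLabel[label].data.push(point);
        }
        return result;
    }

    // regroup([["task1", 10, 99], ["task2", 10, 99], ..., ["task3", 11, 99]])
    // => [{ label: "task1", data: [[10, 99], [11, 99]] }, ...]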

    Read the article

  • What is a good project structure in C

    - by mohit
    Hi, I have been working on Java/J2EE projects, in which I follow the Maven structure. I now want to develop something in C (say, a command-line interpreter on Linux/Ubuntu). I have never developed a project in C before. I want to know what project structure I should follow.
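
    There is no single enforced convention like Maven's for C, but a commonly seen layout for a small tool looks roughly like this (directory names are conventional, not mandatory):

    myshell/
        Makefile        build entry point (or CMakeLists.txt / configure.ac)
        README
        include/        public headers, if the project exposes a library
        src/            .c files and private headers
        tests/          test programs
        docs/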

    Read the article
