Search Results

Search found 28043 results on 1122 pages for 'sql replication'.


  • What does P mean in Sort(Expression<Func<T, P>> expr, ListSortDirection direction)?

    - by Grasshopper
     I am attempting to use the answer in the post "How do you sort an EntitySet&lt;T&gt;" to expose an interface so that I can sort an EntitySet with a BindingList. I have created the class below and I get the following compiler error: "The type or namespace 'P' could not be found (are you missing a using directive or assembly reference?)". Can someone tell me what the P means and which namespace I need to include to get the method below to compile? I am very new to delegates and lambda expressions. Also, can someone confirm that if I create a BindingList from my EntitySet, any modifications I make to the BindingList will be made to the EntitySet? Basically, I have an EntitySet that I need to sort and make changes to. Then I will need to persist these changes using the original Entity that the BindingList came from.

         public class EntitySetBindingWrapper<T> : BindingList<T>
         {
             public EntitySetBindingWrapper(BindingList<T> root) : base(root) { }

             public void Sort(Expression<Func<T, P>> expr, ListSortDirection direction)
             {
                 if (expr == null)
                     base.RemoveSortCore();

                 MemberExpression propExpr = expr as MemberExpression;
                 if (propExpr == null)
                     throw new ArgumentException("You must provide a property", "expr");

                 PropertyDescriptorCollection descriptorCol = TypeDescriptor.GetProperties(typeof(T));
                 IEnumerable<PropertyDescriptor> descriptors = descriptorCol.Cast<PropertyDescriptor>();
                 PropertyDescriptor descriptor = descriptors.First(pd => pd.Name == propExpr.Member.Name);
                 base.ApplySortCore(descriptor, direction);
             }
         }
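     A likely fix, sketched from the compiler error alone (the linked thread isn't shown here): P is meant to be a second generic type parameter, the type of the property the lambda selects, so it must be declared on the method itself; no namespace supplies it. Note also that the MemberExpression is the lambda's Body, not the lambda.

         // Sketch: declare P on the method and unwrap expr.Body.
         public void Sort<P>(Expression<Func<T, P>> expr, ListSortDirection direction)
         {
             if (expr == null)
             {
                 base.RemoveSortCore();
                 return;
             }

             // The lambda's body, not the lambda itself, is the MemberExpression.
             MemberExpression propExpr = expr.Body as MemberExpression;
             if (propExpr == null)
                 throw new ArgumentException("You must provide a property", "expr");

             PropertyDescriptor descriptor = TypeDescriptor.GetProperties(typeof(T))
                 .Cast<PropertyDescriptor>()
                 .First(pd => pd.Name == propExpr.Member.Name);
             base.ApplySortCore(descriptor, direction);
         }

     Call sites then infer P from the lambda, e.g. wrapper.Sort(x => x.Name, ListSortDirection.Ascending). As for the second question: a BindingList constructed over an existing list shares the item references, so edits to items flow back; whether adds and removes propagate depends on how the wrapping list was built (the IList-backed constructor shares the underlying list).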


  • Adding Information in SQLite

    - by Cam
     Hi All, I am having trouble with my Android app when adding information into SQLite. I am relatively new to Java/SQLite, and though I have followed a lot of tutorials and have been able to get the example code to run, I am unable to get tables created and data imported when running my own app. I have included my code in two Java files: Questions (the main program) and QuestionData (a helper class that represents the database). Questions.java:

         public class Questions extends Activity {
             private QuestionData questions;

             @Override
             public void onCreate(Bundle savedInstanceState) {
                 super.onCreate(savedInstanceState);
                 setContentView(R.layout.quiztest);
                 questions = new QuestionData(this);
                 try {
                     Cursor cursor = getQuestions();
                     showQuestions(cursor);
                 } finally {
                     questions.close();
                 }
             }

             private Cursor getQuestions() {
                 // Select query
                 String loadQuestions = "SELECT * FROM questionlist";
                 SQLiteDatabase db = questions.getReadableDatabase();
                 Cursor cursor = db.rawQuery(loadQuestions, null);
                 startManagingCursor(cursor);
                 return cursor;
             }

             private void showQuestions(Cursor cursor) {
                 // Collects string values from the query and displays them.
                 // This part of the code is working fine when there is data present.
             }
         }

     QuestionData.java:

         public class QuestionData extends SQLiteOpenHelper {
             private static final String DATABASE_NAME = "TriviaQuiz.db";
             private static final int DATABASE_VERSION = 2;

             public QuestionData(Context ctx) {
                 super(ctx, DATABASE_NAME, null, DATABASE_VERSION);
             }

             @Override
             public void onCreate(SQLiteDatabase db) {
                 db.execSQL("CREATE TABLE questionlist (_id INTEGER PRIMARY KEY AUTOINCREMENT, QID TEXT, QQuestion TEXT, QAnswer TEXT, QOption1 TEXT, QOption2 TEXT, QOption3 TEXT, QCategoryTagLvl1 TEXT, QCategoryTagLvl2 TEXT, QOptionalTag1 TEXT, QOptionalTag2 TEXT, QOptionalTag3 TEXT, QOptionalTag4 TEXT, QOptionalTag5 TEXT, QTimePeriod TEXT, QDifficultyRating TEXT, QGenderBias TEXT, QAgeBias TEXT, QRegion TEXT, QWikiLink TEXT, QValidationLink1 TEXT, QValidationLink2 TEXT, QHint TEXT, QLastValidation TEXT, QNotes TEXT, QMultimediaType TEXT, QMultimediaLink TEXT, QLastAsked TEXT);");
                 db.execSQL("INSERT INTO questionlist (_id, QID, QQuestion, QAnswer, QOption1, QOption2, QOption3, QCategoryTagLvl1, QCategoryTagLvl2, QOptionalTag1, QOptionalTag2, QOptionalTag3, QOptionalTag4, QOptionalTag5, QTimePeriod, QDifficultyRating, QGenderBias, QAgeBias, QRegion, QWikiLink, QValidationLink1, QValidationLink2, QHint, QLastValidation, QNotes, QMultimediaType, QMultimediaLink, QLastAsked)" +
                     "VALUES (null,'Q00001','Example','Ans1','Q1','Q2','Q3','Q4','','','','','','','','','','','','','','','','','','','','')");
             }

             @Override
             public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                 db.execSQL("DROP TABLE IF EXISTS " + TABLE_NAME);
                 onCreate(db);
             }
         }

     Any suggestions at all would be great. I have tried debugging, which suggests that the database does not exist. Thanks in advance for your assistance.
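     Two likely causes, offered as guesses rather than a confirmed diagnosis: onUpgrade references a TABLE_NAME constant that is never defined in the class as posted, and SQLiteOpenHelper only calls onCreate when the database file is first created, so schema changes made after the first run never take effect until the version is bumped or the app's data is cleared. A minimal sketch of the fix in QuestionData.java:

         // Define the missing constant and bump the version so onUpgrade
         // (which drops and recreates the table) runs on existing installs.
         private static final String TABLE_NAME = "questionlist";
         private static final int DATABASE_VERSION = 3; // was 2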


  • Load Empty Database table

    - by john White
     I am using SQL Express and VS2008. I have a DB with a table named "A", which has an identity column named ID. The ID is auto-incremented; even if a row is deleted, the ID still increases. After several data manipulations, the current ID has reached 15, for example. When I run the application, if there is at least one row: if I add a new row, the new ID is 16 and everything is fine. If the table is empty (no rows): if I add a new row, the new ID is 0, which is an error (I think), and further data manipulation (e.g. delete or update) results in an unhandled exception. Has anyone encountered this? PS: in my table definition, the ID is set as follows: Identity Increment = 1; Identity Seed = 1. The DB load code is:

         dataSet = gcnew DataSet();
         dataAdapter->Fill(dataSet, "A");
         dataTable = dataSet->Tables["A"];
         dbConnection->Open();

     The Update button method:

         dataAdapter->Update(dataSet, "tblInFlow");
         dataSet->AcceptChanges();
         dataTable = dataSet->Tables["tblInFlow"];
         dataGrid->DataSource = dataTable;

     If I press Update: if there is at least one row, the data grid view updates and shows the table correctly. If there is nothing in the table (no data rows), the Add method will add a new row, but starting from ID 0. If I close the program and restart it, the ID is 16, which is correct. This is the Add method:

         row = dataTable->NewRow();
         row["column1"] = "something";
         dataTable->Rows->Add(row);
         dataAdapter->Update(dataSet, "A");
         dataSet->AcceptChanges();
         dataTable = dataSet->Tables["A"];
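     A plausible explanation, offered as an assumption rather than a confirmed diagnosis: DataAdapter->Fill seeds the in-memory auto-increment from the rows it loads, so filling from an empty table leaves the client-side counter at its default of 0, which later collides with the identity values SQL Server hands out. A common workaround, sketched in C++/CLI, is to let the client count downward with placeholder IDs and let the server assign the real ones on Update:

         // Negative placeholder IDs on the client can never collide with the
         // positive identity values the server assigns during Update.
         DataColumn^ idCol = dataSet->Tables["A"]->Columns["ID"];
         idCol->AutoIncrement = true;
         idCol->AutoIncrementSeed = -1;
         idCol->AutoIncrementStep = -1;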


  • MySQL Query to find consecutive available times of variable length

    - by Armaconn
     I have an events table that has user_id, date ('2013-10-01'), time ('04:15:00'), and status_id. What I am looking for is a solution similar to http://stackoverflow.com/questions/2665574/find-consecutive-rows-calculate-duration but I need two additional components: 1) Take the date into consideration, so 10/1/2013 at 11:00 PM runs into 10/2/2013 at 3:00 AM. Feel free to just put in a fake date range (like '2013-10-01' to '2013-10-31'). 2) Limit output to only include runs of 4+ consecutive times (each event is 15 minutes and I want to display minimum blocks of an hour, but I would also like to be able to switch this restriction to 1.5 hours or some other duration if possible). SUMMARY - Looking for a query that provides the start and end times for a set of events that have the same user_id and status_id and are in a continuous series based on date and time, for which I can restrict results based on date range and minimum series duration. So the output should have: user_id, date_start, time_start, date_end, time_end, status_id, duration.

         CREATE TABLE `events` (
           `event_id` int(11) NOT NULL auto_increment COMMENT 'ID',
           `user_id` int(11) NOT NULL,
           `date` date NOT NULL,
           `time` time NOT NULL,
           `status_id` int(11) default NULL,
           PRIMARY KEY (`event_id`)
         ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1568;

         INSERT INTO `events` VALUES(1, 101, '2013-08-14', '23:00:00', 2);
         INSERT INTO `events` VALUES(2, 101, '2013-08-14', '23:15:00', 2);
         INSERT INTO `events` VALUES(3, 101, '2013-08-14', '23:30:00', 2);
         INSERT INTO `events` VALUES(4, 101, '2013-08-14', '23:45:00', 2);
         INSERT INTO `events` VALUES(5, 101, '2013-08-15', '00:00:00', 2);
         INSERT INTO `events` VALUES(6, 101, '2013-08-15', '00:15:00', 1);
         INSERT INTO `events` VALUES(7, 500, '2013-08-14', '23:45:00', 1);
         INSERT INTO `events` VALUES(8, 500, '2013-08-15', '00:00:00', 1);
         INSERT INTO `events` VALUES(9, 500, '2013-08-15', '00:15:00', 2);
         INSERT INTO `events` VALUES(10, 500, '2013-08-15', '00:30:00', 2);
         INSERT INTO `events` VALUES(11, 500, '2013-08-15', '00:45:00', 1);

     Desired output:

         row | user_id | date_start   | time_start | date_end     | time_end   | status_id | duration
         1   | 101     | '2013-08-14' | '23:00:00' | '2013-08-15' | '00:15:00' | 2         | 5
         2   | 101     | '2013-08-15' | '00:00:15' | '2013-08-15' | '00:30:00' | 1         | 1
         3   | 500     | '2013-08-14' | '00:23:45' | '2013-08-15' | '00:15:00' | 1         | 2
         4   | 500     | '2013-08-15' | '00:00:15' | '2013-08-15' | '00:45:00' | 2         | 2
         5   | 500     | '2013-08-15' | '00:00:45' | '2013-08-15' | '01:00:00' | 2         | 1

     *except that rows 2 and 5 wouldn't appear if the duration had to be greater than 30 minutes. Thanks for any help that you can provide! And please let me know if there is anything I can further clarify!!
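     A possible shape for the query, sketched with MySQL 5.x session variables (an assumption; none of this comes from the thread): tag each row with a run id that increments whenever the user or status changes or the slot is not exactly 15 minutes after the previous one, then aggregate per run.

         SELECT user_id, status_id,
                MIN(ts)                      AS start_ts,
                MAX(ts) + INTERVAL 15 MINUTE AS end_ts,
                COUNT(*)                     AS duration  -- in 15-minute slots
         FROM (
             SELECT e.user_id, e.status_id,
                    TIMESTAMP(e.`date`, e.`time`) AS ts,
                    @grp := IF(@prev_user = e.user_id
                               AND @prev_status = e.status_id
                               AND TIMESTAMP(e.`date`, e.`time`) = @prev_ts + INTERVAL 15 MINUTE,
                               @grp, @grp + 1) AS grp,
                    @prev_user   := e.user_id,
                    @prev_status := e.status_id,
                    @prev_ts     := TIMESTAMP(e.`date`, e.`time`)
             FROM events e,
                  (SELECT @grp := 0, @prev_user := NULL,
                          @prev_status := NULL, @prev_ts := NULL) init
             WHERE e.`date` BETWEEN '2013-08-01' AND '2013-08-31'
             ORDER BY e.user_id, e.`date`, e.`time`
         ) runs
         GROUP BY user_id, status_id, grp
         HAVING COUNT(*) >= 4;   -- 4 x 15 min = blocks of at least one hour

     Raising the HAVING threshold to 6 would give the 1.5-hour variant.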


  • How would I associate a "Note" class to 4+ classes without creating a lookup table for each association?

    - by Gthompson83
     I'm creating a project tasklist application. I have project, section, task, and issue classes, and would like to use one class to add simple notes to any object instance of those classes. The task and issue tables both use a standard identity field as a primary key. The section table has a two-field primary key. The project table has a single int primary key defined by the user. Is there a way to associate the note class with each of these without using a separate lookup table for each class? I'm not so sure my original idea is a decent way to implement this. I considered the following (each variable mapping to a field in the notes table):

         Private _NoteId As Integer
         Private _ProjectId As Integer
         Private _SectionId As Integer
         Private _SectionId2 As Integer
         Private _TaskId As Integer
         Private _IssueId As Integer
         Private _Note As String
         Private _UserId As Guid

     Then I would be able to write separate methods (getProjectNotes, getTaskNotes) to get notes attached to each class. I started writing this a few weeks ago but got pulled away before I could finish. When revisiting this code today, my first thought was "this is retarded". Thoughts on drawbacks to this design?
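     One commonly used alternative, sketched in T-SQL with assumed names (nothing here is from the thread): a single polymorphic association, where the notes table stores the parent's class name plus a serialized key, instead of one nullable FK column per parent class.

         CREATE TABLE Notes (
             NoteId     INT IDENTITY PRIMARY KEY,
             EntityType VARCHAR(20)  NOT NULL,  -- 'Project', 'Section', 'Task', 'Issue'
             EntityKey  VARCHAR(50)  NOT NULL,  -- composite keys serialized, e.g. '12:3' for a section
             Note       VARCHAR(MAX) NOT NULL,
             UserId     UNIQUEIDENTIFIER NOT NULL
         );

         -- Fetching notes for one task then needs no per-class lookup table:
         SELECT Note FROM Notes WHERE EntityType = 'Task' AND EntityKey = '42';

     The trade-off is that the database can no longer enforce referential integrity on EntityKey; that check moves into the application.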


  • How to analyse Wikipedia's article database with R?

    - by Tal Galili
     Hi all, This is a "big" question that I don't know how to start, so I hope some of you can give me a direction. And if this is not a "good" question, I will close the thread with an apology. I wish to go through the database of Wikipedia (let's say the English one) and do statistics. For example, I am interested in how many active editors (which should be defined) Wikipedia had at each point in time (let's say over the last 2 years). I don't know how to build such a database, how to access it, how to know which types of data it has, and so on. So my questions are: What tools do I need for this (besides basic R)? MySQL on my computer? An RODBC database connection? How do you start planning such a project?
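     One possible starting point, sketched under several assumptions (a local MySQL import of the English dump, the classic MediaWiki revision schema with a rev_user column and 'YYYYMMDDHHMMSS' string timestamps, and the RMySQL package): count distinct editors per month in SQL and pull the summary into R.

         # Sketch in R; table and column names follow MediaWiki's schema.
         library(RMySQL)

         con <- dbConnect(MySQL(), dbname = "enwiki", user = "me", password = "secret")
         active <- dbGetQuery(con, "
             SELECT LEFT(rev_timestamp, 6)   AS month,          -- YYYYMM
                    COUNT(DISTINCT rev_user) AS active_editors
             FROM revision
             WHERE rev_user > 0                                 -- registered editors only
             GROUP BY month
             ORDER BY month")
         dbDisconnect(con)

         plot(active$active_editors, type = "l",
              xlab = "month", ylab = "active editors")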


  • Search field based on multiple parameters

    - by Manoj Wadhwani
     Can anybody modify this? When I pass an employee name, the procedure runs only the first search and never checks the other parameters. Could you please modify this stored procedure so that it performs an exact search based on all the parameters supplied?

         --select * from Training_TRNS
         --USP_SearchEmployee '','2008-04-18 00:00:00.000','','','','',''
         ALTER PROCEDURE USP_SearchEmployee
             @EmpName     varchar(100) = null,
             @DateFrom    varchar(100) = null,
             @DateTo      varchar(100) = null,
             @CourseName  varchar(100) = null,
             @JobFunction varchar(100) = null,
             @Region      varchar(100) = null,
             @Status      varchar(100) = null
         AS
         BEGIN
             IF (@EmpName != '' AND @EmpName IS NOT NULL)
             BEGIN
                 SELECT EmpName, CONVERT(varchar, DueDate, 101) AS DueDate, SpeCourse_ID AS CourseName,
                        EmpJobFunction AS JOBFunction, EmpRegion AS Region, Status
                 FROM Training_TRNS
                 WHERE EmpName LIKE '%' + @EmpName + '%'
             END
             ELSE IF (@CourseName != '' AND @CourseName IS NOT NULL)
             BEGIN
                 SELECT EmpName, CONVERT(varchar, DueDate, 101) AS DueDate, SpeCourse_ID AS CourseName,
                        EmpJobFunction AS JOBFunction, EmpRegion AS Region, Status
                 FROM Training_TRNS
                 WHERE SpeCourse_ID LIKE '%' + @CourseName + '%'
             END
             ELSE IF (@JobFunction != '' AND @JobFunction IS NOT NULL)
             BEGIN
                 SELECT EmpName, CONVERT(varchar, DueDate, 101) AS DueDate, SpeCourse_ID AS CourseName,
                        EmpJobFunction AS JOBFunction, EmpRegion AS Region, Status
                 FROM Training_TRNS
                 WHERE EmpJobFunction LIKE '%' + @JobFunction + '%'
             END
             ELSE IF (@Region != '' AND @Region IS NOT NULL)
             BEGIN
                 SELECT EmpName, CONVERT(varchar, DueDate, 101) AS DueDate, SpeCourse_ID AS CourseName,
                        EmpJobFunction AS JOBFunction, EmpRegion AS Region, Status
                 FROM Training_TRNS
                 WHERE EmpRegion LIKE '%' + @Region + '%'
             END
             ELSE IF (@Status != '' AND @Status IS NOT NULL)
             BEGIN
                 SELECT EmpName, CONVERT(varchar, DueDate, 101) AS DueDate, SpeCourse_ID AS CourseName,
                        EmpJobFunction AS JOBFunction, EmpRegion AS Region, Status
                 FROM Training_TRNS
                 WHERE Status LIKE '%' + @Status + '%'
             END
             ELSE IF (@DateFrom != '' AND @DateFrom IS NOT NULL)
             BEGIN
                 SELECT EmpName, CONVERT(varchar, DueDate, 101) AS DueDate, SpeCourse_ID AS CourseName,
                        EmpJobFunction AS JOBFunction, EmpRegion AS Region, Status
                 FROM Training_TRNS
                 WHERE CONVERT(varchar, DueDate, 101) LIKE '%' + CONVERT(varchar, @DateFrom, 101) + '%'
             END
             ELSE
             BEGIN
                 SELECT EmpName, CONVERT(varchar, DueDate, 101) AS DueDate, SpeCourse_ID AS CourseName,
                        EmpJobFunction AS JOBFunction, EmpRegion AS Region, Status
                 FROM Training_TRNS
             END
         END
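     The branching explains the behavior: the IF/ELSE chain stops at the first non-empty parameter, so only one filter is ever applied. A sketch of the usual rewrite (an assumption, not the poster's final code): one SELECT whose WHERE skips blank or NULL parameters, so every supplied filter applies at once.

         ALTER PROCEDURE USP_SearchEmployee
             @EmpName     varchar(100) = null,
             @DateFrom    varchar(100) = null,
             @CourseName  varchar(100) = null,
             @JobFunction varchar(100) = null,
             @Region      varchar(100) = null,
             @Status      varchar(100) = null
         AS
         BEGIN
             SELECT EmpName, CONVERT(varchar, DueDate, 101) AS DueDate, SpeCourse_ID AS CourseName,
                    EmpJobFunction AS JOBFunction, EmpRegion AS Region, Status
             FROM Training_TRNS
             WHERE (@EmpName     IS NULL OR @EmpName     = '' OR EmpName        LIKE '%' + @EmpName     + '%')
               AND (@CourseName  IS NULL OR @CourseName  = '' OR SpeCourse_ID   LIKE '%' + @CourseName  + '%')
               AND (@JobFunction IS NULL OR @JobFunction = '' OR EmpJobFunction LIKE '%' + @JobFunction + '%')
               AND (@Region      IS NULL OR @Region      = '' OR EmpRegion      LIKE '%' + @Region      + '%')
               AND (@Status      IS NULL OR @Status      = '' OR Status         LIKE '%' + @Status      + '%')
               AND (@DateFrom    IS NULL OR @DateFrom    = ''
                    OR CONVERT(varchar, DueDate, 101) = CONVERT(varchar, CAST(@DateFrom AS datetime), 101))
         END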


  • Rails 3 fields_for aggressive loading?

    - by Seth
     Hi all, I'm trying to optimize (limit) queries in a view. I am using the fields_for function. I need to reference various properties of the object, such as username, for display purposes. However, this is a rel table, so I need to join with my users table. The result is N sub-queries, one for each field in fields_for. It's difficult to explain, but I think you'll understand what I'm asking if I paste my code:

         <%= form_for @election do |f| %>
           <%= f.fields_for :voters do |voter| %>
             <%= voter.hidden_field :id %>
             <%= voter.object.user.preferred_name %>
           <% end %>
         <% end %>

     I have like 10,000 users, and many times each election will include all 10,000 users. That's 10,000 subqueries every time this view is loaded. I want fields_for to JOIN on users. Is this possible? I'd like to do something like:

         ...
         <%= f.fields_for :voters, :joins => :users do |voter| %>
         ...
         <% end %>
         ...

     But that, of course, doesn't work :(
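     fields_for itself accepts no :joins option; the usual cure, sketched under the assumption of an Election model with has_many :voters where each voter belongs_to :user, is to eager-load when the record is fetched, so voter.object.user is already in memory:

         # elections_controller.rb: eager-load voters and their users in one pass
         def edit
           @election = Election.includes(:voters => :user).find(params[:id])
         end

     With the association preloaded, the view's voter.object.user.preferred_name calls stop issuing one query per row.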


  • postgres subquery w/ derived column

    - by Wells
     The following query won't work, but it should be clear what I'm trying to do: split the value of t on space and use the last element in that array in the subquery (as it will match tl). Any ideas how to do this? Thanks!

         SELECT t, y, "type",
                regexp_split_to_array(t, ' ') AS t_array,
                sum(dr),
                (SELECT uz FROM f.tfa WHERE tl = t_array[-1]) AS uz,
                sc
         FROM padres.yd_fld
         WHERE y = 2010 AND pos <> 0
         GROUP BY t, y, "type", sc;
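     A sketch of the usual fix (assumed, not from the thread): a SELECT-list alias cannot be referenced elsewhere in the same SELECT, so compute the array in a derived table first; and since PostgreSQL arrays are 1-based, t_array[-1] yields NULL, so the last element has to be addressed via array_length.

         SELECT q.t, q.y, q."type", q.t_array,
                sum(q.dr),
                (SELECT uz
                 FROM f.tfa
                 WHERE tl = q.t_array[array_length(q.t_array, 1)]) AS uz,
                q.sc
         FROM (
             SELECT *, regexp_split_to_array(t, ' ') AS t_array
             FROM padres.yd_fld
             WHERE y = 2010 AND pos <> 0
         ) q
         GROUP BY q.t, q.y, q."type", q.t_array, q.sc;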


  • Can't delete record via the datacontext it was retrieved from

    - by Antilogic
     I just upgraded one of my application's methods to use compiled queries (not sure if this is relevant). Now I'm getting contradicting error messages when I run the code. This is my method:

         MyClass existing = Queries.MyStaticCompiledQuery(
             MyRequestScopedDataContext, param1, param2).SingleOrDefault();
         if (existing != null)
         {
             MyRequestScopedDataContext.MyClasses.DeleteOnSubmit(existing);
         }

     When I run it I get this message: "Cannot remove an entity that has not been attached." Note that the compiled query and the DeleteOnSubmit reference the same DataContext. Still, I figured I'd humor the application and add an attach command before the DeleteOnSubmit, like so:

         MyClass existing = Queries.MyStaticCompiledQuery(
             MyRequestScopedDataContext, param1, param2).SingleOrDefault();
         if (existing != null)
         {
             MyRequestScopedDataContext.MyClasses.Attach(existing);
             MyRequestScopedDataContext.MyClasses.DeleteOnSubmit(existing);
         }

     BUT... when I run this code, I get a completely different, contradictory error message: "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported." I'm at a complete loss... Does anyone else have some insight as to why I can't delete a record via the same DataContext I retrieved it from?
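     One known cause that matches both errors, offered as an assumption: if ObjectTrackingEnabled was switched off on the context, query results come back detached, DeleteOnSubmit then refuses them ("not attached"), and Attach refuses them too because they are not new. A sketch:

         // Tracking must be enabled before the first query on the context.
         MyRequestScopedDataContext.ObjectTrackingEnabled = true;

         MyClass existing = Queries.MyStaticCompiledQuery(
             MyRequestScopedDataContext, param1, param2).SingleOrDefault();
         if (existing != null)
         {
             MyRequestScopedDataContext.MyClasses.DeleteOnSubmit(existing);
             MyRequestScopedDataContext.SubmitChanges();
         }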


  • mysql reference result from subquery

    - by iamrohitbanga
     This is what I am doing:

         UPDATE t1 SET x = a, y = b WHERE ...

     where a and b are obtained from a select query that I know. The select query returns multiple rows whose values are all the same, and when I use GROUP BY or DISTINCT, query execution slows down considerably. a and b are forward references, so MySQL reports an error. I want to set x equal to the value of a from the first row and y equal to the value of b from the first row, to avoid GROUP BY. I don't know how to refer to the first result from the select query. How can I achieve all this?
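     A sketch with placeholder names (the actual select query isn't shown): MySQL's multi-table UPDATE can join the target to a derived table, and LIMIT 1 inside that derived table picks a single row of the duplicated result, avoiding both DISTINCT/GROUP BY and the forward-reference error.

         UPDATE t1
         JOIN (SELECT a, b
               FROM t2            -- placeholder for the real select query
               LIMIT 1) AS s
         SET t1.x = s.a,
             t1.y = s.b;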


  • Eclipselink and update trigger on multiple access to the database

    - by Raven
     Hi, in my project I have a database which many clients connect to. Concurrent access and writing work well. The problem now is avoiding having to reload the data from the database every second just to always have the current state of the data. Does EclipseLink provide a trigger mechanism to (automatically?) reload the data when the database changes? How would one use this trigger? Thanks!
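     EclipseLink does not push database changes into clients by itself; refreshing is either configured (cache coordination, or database event notification on Oracle) or requested per query. A small sketch using the standard JPA 2.0 refresh hint, which EclipseLink honors; note this polls on demand rather than reacting to a trigger:

         // Bypass the shared cache and re-read the row from the database.
         Map<String, Object> props = new HashMap<String, Object>();
         props.put("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH);
         MyEntity fresh = entityManager.find(MyEntity.class, id, props);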


  • Evaluate column value into rows

    - by Hugo Palma
     I have a column whose value is a JSON array. For example:

         [{"att1": "1", "att2": "2"}, {"att1": "3", "att2": "4"}, {"att1": "5", "att2": "6"}]

     What I would like is to provide a view where each element of the JSON array is transformed into a row, and the attributes of each JSON object into columns. Keep in mind that the JSON array doesn't have a fixed size. Any ideas on how I can achieve this?
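     Assuming PostgreSQL 9.3+ (the question doesn't name the database) and placeholder names my_table/doc, a sketch of such a view using json_array_elements:

         CREATE VIEW doc_rows AS
         SELECT t.id,
                elem ->> 'att1' AS att1,
                elem ->> 'att2' AS att2
         FROM my_table t,
              json_array_elements(t.doc::json) AS elem;

     The projected columns are fixed in the view definition; if the set of attributes truly varies per row, the unpivoting has to stay generic (e.g. emit key/value pairs with json_each) rather than one column per attribute.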


  • Having different database sorting order (default_scope) for two different views

    - by Juniper747
     In my model (pins.rb), I have two sorting orders:

         default_scope order: 'pins.featured DESC'   # for adding featured posts to the top of a list
         default_scope order: 'pins.created_at DESC' # for adding the remaining posts beneath the featured posts

     This sorting order (above) is how I want my pins view (index.html.erb) to look, which is just a list of ALL user posts. In my users view (show.html.erb) I am using the same model (pins.rb) to list only the current_user's pins. HOWEVER, there I want the sorting to ignore the "featured" default scope and only use the second scope:

         default_scope order: 'pins.created_at DESC'

     How can I accomplish this? I tried doing something like this:

         default_scope order: 'pins.featured DESC', only: :index
         default_scope order: 'pins.created_at DESC'

     But that didn't fly...

     UPDATE: I updated my model to define a scope:

         scope :featy, order: 'pins.featured DESC'
         default_scope order: 'pins.created_at DESC'

     And updated my pins view to:

         <%= render @pins.featy %>

     However, now when I open my pins view, I get the error:

         undefined method `featy' for #<Array:0x00000100ddbc78>

     UPDATE 2: User.rb:

         class User < ActiveRecord::Base
           attr_accessible :name, :email, :username, :password, :password_confirmation, :avatar,
                           :password_reset_token, :password_reset_sent_at
           has_secure_password
           has_many :pins, dependent: :destroy # destroys user posts when user is destroyed
           # has_many :featured_pins, order: 'featured DESC', class_name: "Pin", source: :pin
           has_attached_file :avatar, :styles => { :medium => "300x300#", :thumb => "120x120#" }
           before_save { |user| user.email = user.email.downcase }
           before_save { |user| user.username = user.username.downcase }
           before_save :create_remember_token
           before_save :capitalize_name
           validates :name, presence: true, length: { maximum: 50 }
           VALID_EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i
           VALID_USERNAME_REGEX = /^[A-Za-z0-9]+(?:[_][A-Za-z0-9]+)*$/
           validates :email, presence: true, format: { with: VALID_EMAIL_REGEX },
                             uniqueness: { case_sensitive: false }
           validates :username, presence: true, format: { with: VALID_USERNAME_REGEX },
                                uniqueness: { case_sensitive: false }
           validates :password, length: { minimum: 6 }, on: :create # on create, because it was causing errors on pw_reset

     Pin.rb:

         class Pin < ActiveRecord::Base
           attr_accessible :content, :title, :privacy, :date, :dark, :bright, :fragmented, :hashtag,
                           :emotion, :user_id, :imagesource, :imageowner, :featured
           belongs_to :user
           before_save :capitalize_title
           before_validation :generate_slug
           validates :content, presence: true, length: { maximum: 8000 }
           validates :title, presence: true, length: { maximum: 24 }
           validates :imagesource, presence: { message: "Please search and choose an image" }, length: { maximum: 255 }
           validates_inclusion_of :privacy, :in => [true, false]
           validates :slug, uniqueness: true, presence: true,
                            exclusion: { in: %w[signup signin signout home info privacy] }

           # for sorting featured and newest posts first
           default_scope order: 'pins.created_at DESC'
           scope :featured_order, order: 'pins.featured DESC'

           def to_param
             slug # or "#{id}-#{name}".parameterize
           end

           def generate_slug
             # makes the url slug address-bar friendly
             self.slug ||= loop do
               random_token = Digest::MD5.hexdigest(Time.zone.now.to_s + title)[0..9] + "-" + "#{title}".parameterize
               break random_token unless Pin.where(slug: random_token).exists?
             end
           end

           protected

           def capitalize_title
             self.title = title.split.map(&:capitalize).join(' ')
           end
         end

     users_controller.rb:

         class UsersController < ApplicationController
           before_filter :signed_in_user, only: [:edit, :update, :show]
           before_filter :correct_user, only: [:edit, :update, :show]
           before_filter :admin_user, only: :destroy

           def index
             if !current_user.admin?
               redirect_to root_path
             end
           end

           def menu
             @user = current_user
           end

           def show
             @user = User.find(params[:id])
             @pins = @user.pins
             current_user.touch(:last_log_in) # sets the last log in time
             if [email protected]?
               render 'pages/info/'
             end
           end

           def new
             @user = User.new
           end

     pins_controller.rb:

         class PinsController < ApplicationController
           before_filter :signed_in_user, except: [:show]

           # GET /pins, GET /pins.json
           def index # Live Feed
             @pins = Pin.all
             @featured_pins = Pin.featured_order
             respond_to do |format|
               format.html # index.html.erb
               format.json { render json: @pins }
             end
           end

           # GET /pins, GET /pins.json
           def show # single Pin view
             @pin = Pin.find_by_slug!(params[:id])
             require 'uri' # this gets the photo's id from the stored uri
             @image_id = URI(@pin.imagesource).path.split('/').second
             if @pin.privacy == true # check for private pins
               if signed_in?
                 if @pin.user_id == current_user.id
                   respond_to do |format|
                     format.html # show.html.erb
                     format.json { render json: @pin }
                   end
                 else
                   redirect_to home_path, notice: "Prohibited 1"
                 end
               else
                 redirect_to home_path, notice: "Prohibited 2"
               end
             else
               respond_to do |format|
                 format.html # show.html.erb
                 format.json { render json: @pin }
               end
             end
           end

           # GET /pins, GET /pins.json
           def new
             @pin = current_user.pins.new
             respond_to do |format|
               format.html # new.html.erb
               format.json { render json: @pin }
             end
           end

           # GET /pins/1/edit
           def edit
             @pin = current_user.pins.find_by_slug!(params[:id])
           end

     Finally, on my index.html.erb I have:

         <%= render @featured_pins %>
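     A sketch of one way out (the scope names are invented): drop default_scope for the featured ordering and use named scopes called on the class or on a relation. Note that in Rails 3, Pin.all returns a plain Array, which is exactly why @pins.featy raised "undefined method": scopes only chain on ActiveRecord classes and relations.

         # pin.rb
         scope :newest_first,   order('pins.created_at DESC')
         scope :featured_first, order('pins.featured DESC, pins.created_at DESC')

         # pins_controller.rb (index: featured pinned to the top)
         @pins = Pin.featured_first

         # users_controller.rb (show: plain reverse-chronological)
         @pins = @user.pins.newest_first

     The index view can then simply <%= render @pins %>, and each view gets its own ordering without any default_scope conflict.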


  • Can a primary key be equal to a different column?

    - by eric
     I know that a primary key must be unique, but is it okay for a primary key to be equal to a different column in the same table by coincidence? For instance, I have 2 tables. One table is called person and holds information about a person (ID, email, telephone, address, name). The other table is staff (ID, pID (person ID), salary, position). In staff, the ID column is the primary key and is used to uniquely identify a staff member; the number runs from 1 to 100. However, the pID (person ID) may be equal to the ID. For instance, the staff ID may be 1 and the pID that it references may also be 1. Is that okay?
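     For what it's worth, a minimal T-SQL sketch of the two tables (column lists abbreviated): the coincidence is harmless as long as staff.pID is a proper foreign key; the two columns just happen to hold the same number, and nothing relates them semantically.

         CREATE TABLE person (
             ID    INT PRIMARY KEY,
             email VARCHAR(100),
             name  VARCHAR(100)
         );

         CREATE TABLE staff (
             ID       INT PRIMARY KEY,                     -- staff number, 1-100
             pID      INT NOT NULL REFERENCES person(ID),  -- may equal staff.ID by chance
             salary   DECIMAL(10, 2),
             position VARCHAR(50)
         );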


  • Only show certain items in a mysql database using php and time()

    - by Jon
     Is there a way to only show the items that are older than a certain date in a PHP/MySQL array? I'm using this query method and sorting by lastpost_cl:

         $query = "SELECT * FROM properties ORDER BY lastpost_cl";
         $result = mysql_query($query);
         $num = mysql_num_rows($result);
         mysql_close();

     and I was thinking the way of checking the time would be:

         if (time() <= strtotime($lastpost_cl)) {
         }

     How can I combine the two and only show lastpost_cl values that are older than the current time?
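     A sketch of the combined version (assuming lastpost_cl is a DATETIME column): let MySQL do the comparison instead of filtering in PHP, keeping the mysql_* calls from the question.

         $query  = "SELECT * FROM properties
                    WHERE lastpost_cl <= NOW()
                    ORDER BY lastpost_cl";
         $result = mysql_query($query);
         $num    = mysql_num_rows($result);
         mysql_close();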


  • Looking for MSSQL Table Design Sanity Check for Profile Tables with Dynamic Columns.

    - by Code Sherpa
     I just want a general sanity check regarding database design. We are building a web system that has both Teachers and Students. Both have accounts in the system, and both have profiles. My question is about the table design of those profile tables. The Teacher profile is pretty static regarding the metadata associated with it: each teacher has a set number of fields exposing information about that individual (schools, degrees, etc.). The students, however, are a different case. We are using a Windows service to pull varying data about the students from an endless stream of Excel spreadsheets. The data gets moved into our database, and the fields then appear in association with the student's profile. Accordingly, each and every student may have very different fields in their profile. I originally started with the concept of three tables:

         Accounts
         --------
         AccountID

         TeacherProfiles
         ---------------
         TeacherProfileID
         AccountID
         SecondarySchool
         University
         YearsTeaching
         Etc...

         StudentProfiles
         ---------------
         StudentProfileID
         AccountID
         Header
         Value

     The StudentProfiles table would hold the names of the column headers from the Excel spreadsheets and the associated values. I have since evolved the design a little to treat profiles more generically, per the attached ERD image. The Teacher and Student "headers" are stored in a table called ProfileAttributeTypes, and responses (either from the Excel document or via input fields on the web form) are put in a ProfileAttributes table. This way both Student and Teacher profiles can be associated with a dynamic flow of profile fields. The Permissions table tells us whether we are dealing with a Student or a Teacher. Since this system is likely to grow quickly, I want to make sure the foundation is solid. Can you please provide feedback about this design and let me know if it seems sound, or if you could see problems it might create and, if so, what might be a better approach? Thanks in advance.
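     For concreteness, a T-SQL sketch of the generalized design described above (column names are guesses from the prose; the actual ERD may differ):

         CREATE TABLE ProfileAttributeTypes (
             ProfileAttributeTypeId INT IDENTITY PRIMARY KEY,
             Name                   VARCHAR(100) NOT NULL   -- the spreadsheet/profile "header"
         );

         CREATE TABLE ProfileAttributes (
             ProfileAttributeId     INT IDENTITY PRIMARY KEY,
             AccountId              INT NOT NULL REFERENCES Accounts(AccountID),
             ProfileAttributeTypeId INT NOT NULL
                 REFERENCES ProfileAttributeTypes(ProfileAttributeTypeId),
             Value                  NVARCHAR(MAX)
         );

     This is the classic entity-attribute-value shape: flexible for the student spreadsheets, but queries that pivot attributes back into columns get verbose, and per-attribute typing and constraints move into application code.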


  • How to avoid geometric slowdown with large Linq transactions?

    - by Shaul
     I've written some really nice, funky libraries for use in LinqToSql. (Some day when I have time to think about it I might make it open source... :) ) Anyway, I'm not sure if this is related to my libraries or not, but I've discovered that when I have a large number of changed objects in one transaction, and then call DataContext.GetChangeSet(), things start getting reaalllly slooowwwww.

     When I break into the code, I find that my program is spinning its wheels doing an awful lot of Equals() comparisons between the objects in the change set. I can't guarantee this is true, but I suspect that if there are n objects in the change set, then the call to GetChangeSet() is causing every object to be compared to every other object for equivalence, i.e. at best (n^2-n)/2 calls to Equals()...

     Yes, of course I could commit each object separately, but that kinda defeats the purpose of transactions. And in the program I'm writing, I could have a batch job containing 100,000 separate items that all need to be committed together. Around 5 billion comparisons there.

     So the question is: (1) is my assessment of the situation correct? Do you get this behavior in pure, textbook LinqToSql, or is this something my libraries are doing? And (2) is there a standard/reasonable workaround so that I can create my batch without making the program geometrically slower with every extra object in the change set?
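     One standard workaround, sketched as an assumption (MyDataContext and ApplyChanges are placeholders, and the chunk size is arbitrary): commit in fixed-size chunks on fresh DataContexts inside a single TransactionScope, so each change set stays small while the batch still succeeds or fails as a unit. Requires a reference to System.Transactions.

         using (var scope = new TransactionScope())
         {
             foreach (var chunk in items.Select((item, i) => new { item, i })
                                        .GroupBy(x => x.i / 500))
             {
                 using (var dc = new MyDataContext())
                 {
                     foreach (var x in chunk)
                         ApplyChanges(dc, x.item); // hypothetical per-item update
                     dc.SubmitChanges();           // change set never exceeds 500 objects
                 }
             }
             scope.Complete();
         }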


  • How efficient is a details table?

    - by Jeffrey Lott
     At my job, we have a pseudo-standard of creating one table to hold the "standard" information for an entity, and a second table, named like 'TableNameDetails', which holds optional data elements. On average, every row in the main table will have about 8-10 detail rows. My question is: what kind of performance impact does this have over adding these details as additional nullable columns on the main table?


  • Clustered index on frequently changing reference table of one or more foreign keys

    - by Ian
     My specific concern is related to the performance of a clustered index on a reference table that has many rapid inserts and deletes.

         Table 1 "Collection":        collection_pk int (among other fields)
         Table 2 "Item":              item_pk int (among other fields)
         Reference "Collection_Items": collection_pk int, item_pk int (combined primary key)

     Because the primary key is composed of both pks, a clustered index is created and the data physically ordered in the table according to the combined keys. I have many users creating and deleting collections and adding and removing items to those collections very frequently, affecting the Collection_Items table and its clustered index. QUESTION PART: Since the Collection_Items table is so dynamic, wouldn't there be a big performance hit from constantly resorting the table rows because of the clustered index? If yes, what should I do to minimize this?
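     A sketch for intuition (T-SQL assumed; none of this is from the thread): rows are not globally re-sorted on every insert. With the clustered key ordered as (collection_pk, item_pk), a new item lands inside its collection's existing range, costing at most a page split; the practical mitigations are choosing that key order and leaving free space per page for churn.

         CREATE TABLE Collection_Items (
             collection_pk INT NOT NULL REFERENCES Collection (collection_pk),
             item_pk       INT NOT NULL REFERENCES Item (item_pk),
             PRIMARY KEY CLUSTERED (collection_pk, item_pk)
         );

         -- Rebuild periodically with headroom on each page to absorb inserts
         -- between maintenance windows:
         ALTER INDEX ALL ON Collection_Items REBUILD WITH (FILLFACTOR = 80);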

