Search Results

Search found 5380 results on 216 pages for 'primary'.

  • Extracting DCT coefficients from encoded images and video

    - by misha
    Is there a way to easily extract the DCT coefficients (and quantization parameters) from encoded images and video? Any decoder software must be using them to decode block-DCT encoded images and video, so I'm pretty sure the decoder knows what they are. Is there a way to expose them to whoever is using the decoder? I'm implementing some video quality assessment algorithms that work directly in the DCT domain. Currently, the majority of my code uses OpenCV, so it would be great if anyone knows of a solution using that framework. I don't mind using other libraries (perhaps libjpeg, but that seems to be for still images only), but my primary concern is to do as little format-specific work as possible (I don't want to reinvent the wheel and write my own decoders). I want to be able to open any video/image (H.264, MPEG, JPEG, etc.) that OpenCV can open, and if it's block DCT-encoded, to get the DCT coefficients. In the worst case, I know that I can write my own block DCT code, run the decompressed frames/images through it, and then I'd be back in the DCT domain. That's hardly an elegant solution, and I hope I can do better. Presently, I use the fairly common OpenCV boilerplate to open images:

        IplImage *image = cvLoadImage(filename);
        // Run quality assessment metric

    The code I'm using for video is equally trivial:

        CvCapture *capture = cvCaptureFromAVI(filename);
        while (cvGrabFrame(capture)) {
            IplImage *frame = cvRetrieveFrame(capture);
            // Run quality assessment metric on frame
        }
        cvReleaseCapture(&capture);

    In both cases, I get a 3-channel IplImage in BGR format. Is there any way I can get the DCT coefficients as well?
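
    For the still-image (JPEG) case at least, libjpeg can hand back the quantized DCT coefficients without running the full decode to pixels. A minimal, untested sketch (plain libjpeg, not integrated with OpenCV; error handling elided):

        #include <cstdio>
        #include <jpeglib.h>

        // Read the quantized DCT coefficients of a JPEG without decoding to pixels.
        void dump_dc_coefficients(const char *filename) {
            FILE *f = fopen(filename, "rb");
            jpeg_decompress_struct cinfo;
            jpeg_error_mgr jerr;
            cinfo.err = jpeg_std_error(&jerr);
            jpeg_create_decompress(&cinfo);
            jpeg_stdio_src(&cinfo, f);
            jpeg_read_header(&cinfo, TRUE);

            // One virtual array of 8x8 coefficient blocks per component (Y, Cb, Cr).
            jvirt_barray_ptr *coeffs = jpeg_read_coefficients(&cinfo);
            for (int c = 0; c < cinfo.num_components; ++c) {
                JBLOCKARRAY rows = (cinfo.mem->access_virt_barray)(
                    (j_common_ptr)&cinfo, coeffs[c], 0, 1, FALSE);
                // rows[0][0][0] is the DC coefficient of the first block; the matching
                // quantization table is cinfo.quant_tbl_ptrs[cinfo.comp_info[c].quant_tbl_no].
                printf("component %d: first DC coefficient = %d\n", c, rows[0][0][0]);
            }
            jpeg_finish_decompress(&cinfo);
            jpeg_destroy_decompress(&cinfo);
            fclose(f);
        }

    This only covers JPEG, not the video codecs, but it shows the kind of hook a decoder can expose.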

  • ASP.MVC 2 Model Data Persistence

    - by toccig
    I'm an MVC1 programmer, new to MVC2. The data will not persist to the database in an edit scenario; create works fine. Controller:

        //
        // POST: /Attendee/Edit/5
        [Authorize(Roles = "Admin")]
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(Attendee attendee)
        {
            if (ModelState.IsValid)
            {
                UpdateModel(attendee, "Attendee");
                repository.Save();
                return RedirectToAction("Details", attendee);
            }
            else
            {
                return View(attendee);
            }
        }

    Model:

        [MetadataType(typeof(Attendee_Validation))]
        public partial class Attendee { }

        public class Attendee_Validation
        {
            [HiddenInput(DisplayValue = false)]
            public int attendee_id { get; set; }

            [HiddenInput(DisplayValue = false)]
            public int attendee_pin { get; set; }

            [Required(ErrorMessage = "* required")]
            [StringLength(50, ErrorMessage = "* Must be under 50 characters")]
            public string attendee_fname { get; set; }

            [StringLength(50, ErrorMessage = "* Must be under 50 characters")]
            public string attendee_mname { get; set; }
        }

    I tried to add [Bind(Exclude="attendee_id")] above the class declaration, but then the value of the attendee_id attribute is set to '0'. View (strongly typed):

        <% using (Html.BeginForm()) { %>
            ...
            <%= Html.Hidden("attendee_id", Model.attendee_id) %>
            ...
            <%= Html.SubmitButton("btnSubmit", "Save") %>
        <% } %>

    Basically, the repository.Save() call seems to do nothing. I imagine it has something to do with a primary key constraint violation, but I'm not getting any errors from SQL Server. The application appears to run fine, but the data is never persisted to the database.
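
    For what it's worth, a common cause of this symptom in LINQ to SQL is that the Attendee built by model binding is never attached to the DataContext, so SubmitChanges() has no tracked changes to flush. One conventional fix is to load the attached entity and copy the posted values onto it — a sketch, where GetById is a hypothetical repository method:

        [Authorize(Roles = "Admin")]
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(int id, FormCollection form)
        {
            // Loaded through the same DataContext, so it is change-tracked.
            Attendee attendee = repository.GetById(id);
            if (TryUpdateModel(attendee))
            {
                repository.Save();  // SubmitChanges() now sees real modifications
                return RedirectToAction("Details", new { id = attendee.attendee_id });
            }
            return View(attendee);
        }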

  • iPhone or Android apps that use SMS based authentication?

    - by JSW
    What are some iPhone or Android applications that use SMS as their primary means of user authentication? I'm interested to see such apps in action. SMS-auth seems like a natural approach that is well-suited to mobile contexts. The basic workflow is:

    1. To sign up, a user provides a phone number.
    2. The app calls a backend webservice, which generates a signed URL and sends it to the phone number via an SMS gateway.
    3. The user receives the SMS, clicks the link, and is thus verified and logged in.

    This results in a very strong user identity that is difficult to spoof yet fairly easy to use. It can be paired with a username or additional account attributes as needed for the product requirements. Despite the advantages, this does not seem to be in much use - hence my question. My initial assumption is that this is because products and users are wary of asking for / providing phone numbers, which users consider sensitive information. That said, I hope this becomes an increasingly more commonplace approach.
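
    As an illustration of step 2, the "signed URL" can be as simple as an HMAC over the phone number and an expiry time. A sketch in Python — the secret, URL and field names are all made up:

        import hashlib
        import hmac
        import time
        from urllib.parse import urlencode

        SECRET = b"server-side-secret"  # hypothetical; keep out of source control

        def make_login_url(phone: str) -> str:
            expires = int(time.time()) + 600  # link valid for 10 minutes
            msg = "{}|{}".format(phone, expires).encode()
            sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
            query = urlencode({"p": phone, "e": expires, "s": sig})
            return "https://example.com/verify?" + query

        # The verify endpoint recomputes the HMAC over (p, e) and checks the
        # expiry before logging the user in.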

  • Updating database row from model

    - by Jamie Dixon
    Hey everyone, I'm having a few problems updating a row in my database using Linq2Sql. Inside my model I have two methods for updating and saving from my controller, which in turn receives an updated model from my view. My model methods look like:

        public void Update(Activity activity)
        {
            _db.Activities.InsertOnSubmit(activity);
        }

        public void Save()
        {
            _db.SubmitChanges();
        }

    and the code in my controller looks like:

        [HttpPost]
        public ActionResult Edit(Activity activity)
        {
            if (ModelState.IsValid)
            {
                UpdateModel<Activity>(activity);
                _activitiesModel.Update(activity);
                _activitiesModel.Save();
            }
            return View(activity);
        }

    The problem I'm having is that this code inserts a new entry into the database, even though the model item I'm inserting-on-submit contains a primary key field. I've also tried re-attaching the model object back to the data source, but this throws an error because the item already exists. Any pointers in the right direction will be greatly appreciated.

    UPDATE: I'm using dependency injection to instantiate my datacontext object as follows:

        IMyDataContext _db;

        public ActivitiesModel(IMyDataContext db)
        {
            _db = db;
        }
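
    In LINQ to SQL, InsertOnSubmit always issues an INSERT; a detached entity has to be attached as modified for an UPDATE to be generated. A sketch of one fix — note that Attach(entity, true) requires the table to have a ROWVERSION/timestamp column (or UpdateCheck.Never on its members):

        public void Update(Activity activity)
        {
            // Attach the detached, already-edited entity and mark it modified;
            // SubmitChanges() will then generate an UPDATE instead of an INSERT.
            _db.Activities.Attach(activity, true);
        }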

  • NHibernate class referencing discriminator based subclass

    - by Rich
    I have a generic class Lookup which contains code/value properties. The table PK is category/code. There are subclasses for each category of lookup, and I've set the discriminator column in the base class and its value in the subclass. See the example below (only key pieces shown):

        public class Lookup {
            public string Category;
            public string Code;
            public string Description;
        }

        public class LookupClassMap : ClassMap<Lookup> {
            public LookupClassMap() {
                CompositeId()
                    .KeyProperty(x => x.Category, "CATEGORY_ID")
                    .KeyProperty(x => x.Code, "CODE_ID");
                DiscriminateSubclassesBasedOnColumn("CATEGORY_ID");
            }
        }

        public class MaritalStatus : Lookup {}

        public class MaritalStatusClassMap : SubclassMap<MaritalStatus> {
            public MaritalStatusClassMap() {
                DiscriminatorValue(13);
            }
        }

    This all works. Here's the problem: when a class has a property of type MaritalStatus, I create a reference based on the contained code ID column ("MARITAL_STATUS_CODE_ID"). NHibernate doesn't like it because I didn't map both primary key columns (category ID and code ID). But with the reference being of type MaritalStatus, NHibernate should already know what the value of the category ID is going to be, because of the discriminator value. What am I missing?

  • design pattern for related inputs

    - by curiousMo
    My question is a design question. Let's say I have a data entry web page with 4 drop-down lists, each depending on the previous one, and a bunch of text boxes:

        ddlCountry (DropDownList)
        ddlState (DropDownList)
        ddlCity (DropDownList)
        ddlBoro (DropDownList)
        txtAddress (TextBox)
        txtZipcode (TextBox)

    and an object that represents a data row with a value for each:

        countrySeqid
        stateSeqid
        citySeqid
        boroSeqid
        address
        zipCode

    Naturally the country, state, city and boro values will be values of primary keys of some lookup tables. When the user chooses to edit that record, I would load it from the database and load it into the page. The issue I have is how to streamline loading the DropDownLists. I have some code that would grab the object, look through its values and move them to their corresponding input controls in one shot, but in this case I will have to load ddlCountry with possible values, then assign values, then do the same thing for the rest of the ddls. I guess I am looking for an elegant solution. I am using asp.net, but I think that is irrelevant to the question; I am looking more for a design pattern.
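
    One way to streamline it — an illustrative WebForms sketch, where the lookup methods and the row object are hypothetical — is a single bind helper, so edit-mode population collapses into four calls in dependency order:

        using System.Collections.Generic;
        using System.Web.UI.WebControls;

        void Bind(DropDownList ddl, IEnumerable<KeyValuePair<int, string>> items, int selectedId)
        {
            // Load the possible values, then select the stored one.
            ddl.DataSource = items;
            ddl.DataValueField = "Key";
            ddl.DataTextField = "Value";
            ddl.DataBind();
            ddl.SelectedValue = selectedId.ToString();
        }

        // Edit mode: each list's choices are driven by the previous selection.
        Bind(ddlCountry, lookup.Countries(),              row.CountrySeqid);
        Bind(ddlState,   lookup.States(row.CountrySeqid), row.StateSeqid);
        Bind(ddlCity,    lookup.Cities(row.StateSeqid),   row.CitySeqid);
        Bind(ddlBoro,    lookup.Boros(row.CitySeqid),     row.BoroSeqid);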

  • How can I concisely copy multiple SQL rows, with minor modifications?

    - by Steve Jessop
    I'm copying a subset of some data, so that the copy will be independently modifiable in future. One of my SQL statements looks something like this (I've changed table and column names):

        INSERT Product (
            ProductRangeID,
            Name, Weight, Price, Color, And, So, On
        )
        SELECT
            @newrangeid AS ProductRangeID,
            Name, Weight, Price, Color, And, So, On
        FROM Product
        WHERE ProductRangeID = @oldrangeid AND Color = 'Blue'

    That is, we're launching a new product range which initially just consists of all the blue items in some specified current range, under new SKUs. In future we may change the "blue-range" versions of the products independently of the old ones. I'm pretty new at SQL: is there something clever I should do to avoid listing all those columns, or at least avoid listing them twice? I can live with the current code, but I'd rather not have to come back and modify it if new columns are added to Product. In its current form it would just silently fail to copy the new column if I forget to do that, which should show up in testing but isn't great. I am copying every column except for the ProductRangeID (which I modify), the ProductID (incrementing primary key) and two DateCreated and timestamp columns (which take their auto-generated values for the new row). Btw, I suspect I should probably have a separate join table between ProductID and ProductRangeID; I didn't define the tables. This is in a T-SQL stored procedure on SQL Server 2008, if that makes any difference.
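
    If retyping the column list ever becomes a real burden, it can be generated from the catalog views at run time. A sketch in T-SQL — the excluded column names follow the description above and may not match the real schema:

        DECLARE @cols nvarchar(max);
        SELECT @cols = STUFF((
            SELECT ',' + QUOTENAME(name)
            FROM sys.columns
            WHERE object_id = OBJECT_ID('dbo.Product')
              AND name NOT IN ('ProductID', 'ProductRangeID', 'DateCreated', 'RowTimestamp')
            FOR XML PATH('')), 1, 1, '');

        DECLARE @sql nvarchar(max) = N'
            INSERT dbo.Product (ProductRangeID, ' + @cols + N')
            SELECT @newrangeid, ' + @cols + N'
            FROM dbo.Product
            WHERE ProductRangeID = @oldrangeid AND Color = ''Blue'';';

        EXEC sp_executesql @sql,
             N'@newrangeid int, @oldrangeid int',
             @newrangeid, @oldrangeid;

    The trade-off is dynamic SQL inside the stored procedure, so whether this beats listing the columns twice is a judgment call.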

  • FluentNHibernate error -- "Invalid object name"

    - by goober
    I'm attempting to do the most simple of mappings with FluentNHibernate & Sql2005. Basically, I have a database table called "sv_Categories". I'd like to add a category, setting the ID automatically, and adding the userid and title supplied. Database table layout:

        CategoryID -- int -- not null, primary key, auto-incrementing
        UserID     -- uniqueidentifier -- not null
        Title      -- varchar(50) -- not null

    Simple. My SessionFactory code (which works, as far as I can tell):

        _SessionFactory = Fluently.Configure().Database(
                MsSqlConfiguration.MsSql2005
                    .ConnectionString(c => c.FromConnectionStringWithKey("SVTest")))
            .Mappings(x => x.FluentMappings.AddFromAssemblyOf<CategoryMap>())
            .BuildSessionFactory();

    My ClassMap code:

        public class CategoryMap : ClassMap<Category>
        {
            public CategoryMap()
            {
                Id(x => x.ID).Column("CategoryID").Unique();
                Map(x => x.Title).Column("Title").Not.Nullable();
                Map(x => x.UserID).Column("UserID").Not.Nullable();
            }
        }

    My Class code:

        public class Category
        {
            public virtual int ID { get; private set; }
            public virtual string Title { get; set; }
            public virtual Guid UserID { get; set; }

            public Category()
            {
                // do nothing
            }
        }

    And the page where I save the object:

        public void Add(Category catToAdd)
        {
            using (ISession session = SessionProvider.GetSession())
            {
                using (ITransaction transaction = session.BeginTransaction())
                {
                    session.Save(catToAdd);
                    transaction.Commit();
                }
            }
        }

    I receive the error:

        Invalid object name 'Category'.
        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Data.SqlClient.SqlException: Invalid object name 'Category'.

    I think it might be that I haven't told the CategoryMap class to use the "sv_Categories" table, but I'm not sure how to do that. Any help would be appreciated. Thanks!
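
    The guess at the end is most likely right: without an explicit table name, Fluent NHibernate falls back to the entity name ('Category'), which is exactly what the error shows. A sketch of the map with the table named (and the identity column marked as database-generated):

        public class CategoryMap : ClassMap<Category>
        {
            public CategoryMap()
            {
                Table("sv_Categories");  // map the entity to the existing table
                Id(x => x.ID).Column("CategoryID").GeneratedBy.Identity();
                Map(x => x.Title).Column("Title").Not.Nullable();
                Map(x => x.UserID).Column("UserID").Not.Nullable();
            }
        }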

  • Paperclip: Stay put on edit

    - by EricR
    When a user edits something in my application, they're forced to re-upload their image via Paperclip even if they aren't changing it. Failing to do so will cause an error, since I validate_presence_of :image. This is quite annoying. How can I make it so Paperclip won't update its attributes if a user simply doesn't supply a new image on an edit? The photo controller is fresh out of Rails' scaffold generator. The rest of the source code is provided below.

    models/accommodation.rb:

        class Accommodation < ActiveRecord::Base
          attr_accessible :photo
          validates_presence_of :photo
          has_one :photo
          has_many :notifications
          belongs_to :user
          accepts_nested_attributes_for :photo, :allow_destroy => true
        end

    controllers/accommodations_controller.rb:

        class AccommodationsController < ApplicationController
          def index
            @accommodations = Accommodation.all
          end

          def show
            @accommodation = Accommodation.find(params[:id])
          rescue ActiveRecord::RecordNotFound
            flash[:error] = "Accommodation not found."
            redirect_to :home
          end

          def new
            @accommodation = current_user.accommodations.build
            @accommodation.build_photo
          end

          def create
            @accommodation = current_user.accommodations.build(params[:accommodation])
            if @accommodation.save
              flash[:notice] = "Successfully created your accommodation."
              redirect_to @accommodation
            else
              @accommodation.build_photo
              render :new
            end
          end

          def edit
            @accommodation = Accommodation.find(params[:id])
            @accommodation.build_photo
          rescue ActiveRecord::RecordNotFound
            flash[:error] = "Accommodation not found."
            redirect_to :home
          end

          def update
            @accommodation = Accommodation.find(params[:id])
            if @accommodation.update_attributes(params[:accommodation])
              flash[:notice] = "Successfully updated accommodation."
              redirect_to @accommodation
            else
              @accommodation.build_photo
              render :edit
            end
          end

          def destroy
            @accommodation = Accommodation.find(params[:id])
            @accommodation.destroy
            flash[:notice] = "Successfully destroyed accommodation."
            redirect_to :inkeep
          end
        end

    models/photo.rb:

        class Photo < ActiveRecord::Base
          attr_accessible :image, :primary
          belongs_to :accommodation
          has_attached_file :image, :styles => { :thumb => "100x100#", :small => "150x150>" }
        end
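
    One common way out (an untested sketch) is to reject the nested photo attributes when no file was chosen, so an empty upload field on edit leaves the existing record alone:

        class Accommodation < ActiveRecord::Base
          # Skip building/replacing the nested photo when the file field is blank,
          # so an edit without a new upload keeps the current attachment.
          accepts_nested_attributes_for :photo, :allow_destroy => true,
            :reject_if => proc { |attrs| attrs['image'].blank? }
        end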

  • iPhone - Using sql database - insert statement failing

    - by Satyam svv
    Hi, I'm using an sqlite database in my iPhone app. I have a table which has 3 integer columns. I'm using the following code to write to that database table:

        -(BOOL)insertTestResult {
            NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString* documentsDirectory = [paths objectAtIndex:0];
            NSString* dataBasePath = [documentsDirectory stringByAppendingPathComponent:@"test21.sqlite3"];
            BOOL success = NO;
            sqlite3* database = 0;
            if (sqlite3_open([dataBasePath UTF8String], &database) == SQLITE_OK) {
                BOOL res = (insertResultStatement == nil) ? createStatement(insertResult, &insertResultStatement, database) : YES;
                if (res) {
                    int i = 1;
                    sqlite3_bind_int(insertResultStatement, 0, i);
                    sqlite3_bind_int(insertResultStatement, 1, i);
                    sqlite3_bind_int(insertResultStatement, 2, i);
                    int err = sqlite3_step(insertResultStatement);
                    if (SQLITE_ERROR == err) {
                        NSAssert1(0, @"Error while inserting Result. '%s'", sqlite3_errmsg(database));
                        success = NO;
                    } else {
                        success = YES;
                    }
                    sqlite3_finalize(insertResultStatement);
                    insertResultStatement = nil;
                }
            }
            sqlite3_close(database);
            return success;
        }

    The sqlite3_step call always gives err as 19, and I'm not able to understand where the issue is. The tables were created with the following queries:

        CREATE TABLE [Patient] (
            PID integer NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
            PFirstName text NOT NULL,
            PLastName text,
            PSex text NOT NULL,
            PDOB text NOT NULL,
            PEducation text NOT NULL,
            PHandedness text,
            PType text)

        CREATE TABLE PatientResult(
            PID INTEGER,
            PFreeScore INTEGER NOT NULL,
            PForcedScore INTEGER NOT NULL,
            FOREIGN KEY (PID) REFERENCES Patient(PID))

    I have only one entry in the Patient table, with PID = 1.

        BOOL createStatement(const char* query, sqlite3_stmt** stmt, sqlite3* database) {
            BOOL res = (sqlite3_prepare_v2(database, query, -1, stmt, NULL) == SQLITE_OK);
            if (!res)
                NSLog(@"Error while creating %s => '%s'", query, sqlite3_errmsg(database));
            return res;
        }
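
    Error 19 is SQLITE_CONSTRAINT, and one likely cause is visible in the code above: SQLite bind parameter indexes are 1-based, so the bind at index 0 fails (SQLITE_RANGE), the last parameter stays unbound (NULL), and a NOT NULL column is violated. A sketch of the corrected binds:

        // SQLite parameter indexes start at 1, not 0.
        sqlite3_bind_int(insertResultStatement, 1, i);  // PID
        sqlite3_bind_int(insertResultStatement, 2, i);  // PFreeScore
        sqlite3_bind_int(insertResultStatement, 3, i);  // PForcedScore

    Checking the return value of each sqlite3_bind_int call would have surfaced the failed bind immediately.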

  • Questions about openid and dotnetauthentication

    - by chobo2
    Hi, I am looking into OpenID and the dotnetauthentication library. However, I still have some outstanding questions:

    1. Is the id that comes back unique for each user? Can I store this id in a database as the userId (currently this field is a primary key and unique identifier)?
    2. I read that you can try to request information such as an email address, but the provider may not give it to you. What happens if you need this information? It kind of sucks if I have to pop up another form right away and ask for their email address and whatever other fields I need. That seems to defeat the purpose a bit, as I always considered a benefit of OpenID to be that you don't have to fill out registration forms.
    3. Is it better to only offer some predefined choices (Google, Yahoo, OpenID, Facebook), rather than also letting users type in their own provider (i.e. a field to type in a URL)? I am thinking of this because it goes back to point 2: if they type in a provider that does not give me the information that I need, I am then stuck.
    4. How do you log a person out? Do you just kill the forms authentication ticket?
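
    On the last question: with ASP.NET forms authentication, signing out is indeed just clearing the ticket. A sketch (MVC-style action; the redirect target is illustrative):

        public ActionResult LogOff()
        {
            FormsAuthentication.SignOut();  // removes the forms-auth cookie/ticket
            Session.Abandon();              // optionally drop server-side session state
            return RedirectToAction("Index", "Home");
        }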

  • How to make ActiveRecord work with legacy partitioned/sharded databases/tables?

    - by Utensil
    Thanks for your time first... after all the searching on Google, GitHub and here, I only got more confused by the big words (partition/shard/federate), so I figure I should describe the specific problem I met and ask around. My company's databases deal with massive numbers of users and orders, so we split databases and tables in various ways, some of which are described below:

        way          database and table name    sharded by (maybe it should be called "partitioned by"?)
        YZ.X         db_YZ.tb_X                 order serial number, last three digits
        YYYYMMDD.    db_YYYYMMDD.tb             date
        YYYYMM.DD    db_YYYYMM.tb_DD            date too

    The basic concept is that databases and tables are separated according to a field (not necessarily the primary key), and there are too many databases and too many tables, so writing or magically generating one database.yml config for each database and one model for each table isn't possible, or at least not the best solution. I looked into drnic's magic solutions, and DataFabric, and even the source code of ActiveRecord. Maybe I could use ERB to generate database.yml and do the database connection in an around filter, and maybe I could use named_scope to dynamically decide the table name for find, but update/create operations are bound to "self.class.quoted_table_name", so I couldn't easily get my problem solved. And even if I could generate one model for each table, their number runs to 30 at most — but this is just not DRY! What I need is a clean solution like the following DSL:

        class Order < ActiveRecord::Base
          shard_by :order_serialno do |key|
            [get_db_config_by(key), # because some or all of the databases might share the same machine in a regular way or can be configured by a hash of regex, and it can also be a const
             get_db_name_by(key),
             get_tb_name_by(key)]
          end
        end

    Can anybody enlighten me? Any help would be greatly appreciated.
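
    For what it's worth, the building blocks for such a DSL do exist in ActiveRecord: both the connection and the table name can be switched at class level. A very rough, untested sketch — DB_CONFIGS and the naming scheme are made up, and real sharding plugins additionally handle connection pooling and caching of the resolved classes:

        class Order < ActiveRecord::Base
          def self.for_serialno(serialno)
            suffix = serialno[-3, 3]                  # last three digits
            establish_connection(DB_CONFIGS[suffix])  # hypothetical config lookup
            self.table_name = "tb_#{suffix}"          # set_table_name on older AR versions
            self
          end
        end

        # Order.for_serialno('98765432101').find(...) would talk to db_101.tb_101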

  • PHP Pass Dynamic Array name to function

    - by Brad
    How do I pass an array key to a function to pull up the right key's data?

    The array:

        <?php
        $var['TEST1'] = array(
            'Description' => 'This is a Description',
            'Version'     => '1.11',
            'fields'      => array(
                'ID' => array(
                    'type'   => 'int',
                    'length' => '11',
                    'misc'   => 'auto_increment'
                ),
                'DATA' => array(
                    'type'   => 'varchar',
                    'length' => '255'
                )
            )
        );

        $var['TEST2'] = array(
            'Description' => 'This is the 2nd Description',
            'Version'     => '2.1',
            'fields'      => array(
                'ID' => array(
                    'type'   => 'int',
                    'length' => '11',
                    'misc'   => 'auto_increment'
                ),
                'DATA' => array(
                    'type'   => 'varchar',
                    'length' => '255'
                )
            )
        );

    The function:

        <?php
        $obj = 'TEST1';
        print_r($schema[$obj]); // <-- Gives me output. But calling the function doesn't.
        echo buildStructure($obj);

        /**
         * @TODO add auto_inc support
         */
        function buildStructure($obj) {
            $output = '';
            $primaryKey = $schema["{$obj}"]['primary key'];
            foreach ($schema["{$obj}"]['fields'] as $name => $tag) // #### ERROR #### Invalid argument supplied for foreach()
            {
                $type = $tag['type'];
                $length = $tag['length'];
                $default = $tag['default'];
                $description = $tag['description'];
                $length = (isset($length)) ? "({$length})" : '';
                $default = ($default == NULL) ? "NULL" : $default;
                $output .= "`{$name}` {$type}{$length} DEFAULT {$default} COMMENT `{$description}`, ";
            }
            return $output;
        }
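
    The foreach error is most likely a scoping problem rather than an array problem: inside buildStructure(), $schema is undefined, because PHP functions do not see variables from the enclosing scope. Passing the array in explicitly fixes it — a sketch (note the array above is actually named $var, so that is what gets passed):

        <?php
        function buildStructure(array $schema, $obj) {
            $output = '';
            // $schema is now a parameter, so it is defined inside the function.
            foreach ($schema[$obj]['fields'] as $name => $tag) {
                // ... build the column definition as before ...
            }
            return $output;
        }

        echo buildStructure($var, 'TEST1');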

  • How to duplicate all data in a table except for a single column that should be changed.

    - by twiga
    I have a question regarding a unified insert query against tables with different data structures (Oracle). Let me elaborate with an example:

        tb_customers (
            id         NUMBER(3),
            name       VARCHAR2(40),
            archive_id NUMBER(3)
        )

        tb_suppliers (
            id         NUMBER(3),
            name       VARCHAR2(40),
            contact    VARCHAR2(40),
            xxx,
            xxx,
            archive_id NUMBER(3)
        )

    The only column that is present in all tables is [archive_id]. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly. [archive_id] is always part of the primary key. My problem is with the select statements that do the actual duplication of the data. Because the columns are variable, I am struggling to come up with a unified select statement that will copy the data and update the archive_id. One solution (that works) is to iterate over all the tables in a stored procedure and do:

        CREATE TABLE temp AS (SELECT * FROM ORIGINAL_TABLE);
        UPDATE temp SET archive_id = something;
        INSERT INTO ORIGINAL_TABLE (SELECT * FROM temp);
        DROP TABLE temp;

    I do not like this solution very much, as the DDL commands muck up all restore points. Does anyone else have any solution?
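
    One DDL-free alternative (an untested PL/SQL sketch): build the per-table column list from the data dictionary and run a dynamic INSERT..SELECT that overrides archive_id. LISTAGG needs Oracle 11gR2; on older versions the list can be built in a cursor loop instead.

        CREATE OR REPLACE PROCEDURE archive_copy(p_old_id NUMBER, p_new_id NUMBER) AS
          v_cols VARCHAR2(4000);
        BEGIN
          FOR t IN (SELECT table_name FROM user_tables
                    WHERE table_name IN ('TB_CUSTOMERS', 'TB_SUPPLIERS')) LOOP
            -- Column list minus ARCHIVE_ID, straight from the data dictionary.
            SELECT LISTAGG(column_name, ',') WITHIN GROUP (ORDER BY column_id)
              INTO v_cols
              FROM user_tab_columns
             WHERE table_name = t.table_name
               AND column_name <> 'ARCHIVE_ID';

            EXECUTE IMMEDIATE
              'INSERT INTO ' || t.table_name || ' (' || v_cols || ', archive_id) '
              || 'SELECT ' || v_cols || ', :new_id FROM ' || t.table_name
              || ' WHERE archive_id = :old_id'
              USING p_new_id, p_old_id;
          END LOOP;
        END;
        /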

  • Strange data swapping error occurs when I attempt to update rows in my table from another table in m

    - by Wesley
    So I have a table of data that is 10,000 lines long. Several of the columns in the table simply describe information about one of the columns; meaning, only one column has the content, and the rest of the columns describe the location of the content (it's for a book). Right now, only 6,000 of the 10,000 rows' content column is filled with its content; rows 6,000-10,000's content column simply says null. I have another table in the db that has the content for rows 6,000-10,000, with the correct corresponding primary key, which would (seemingly) make it easy to update the 10,000-row table. I have been trying an update query such as the following:

        UPDATE table(10,000)
        SET content_column = (SELECT content
                              FROM table(6,000-10,000)
                              WHERE table(10,000).id = table(6,000-10,000).id)

    which kind of works: it pulls in the data from the second table just fine, but it replaces the existing content column with null. So rows 1-6,000's content column becomes null, and rows 6,000-10,000's content column has the correct values... pretty strange, I thought. Does anybody have any thoughts about where I am going wrong? If you could show me a better SQL query, I would appreciate it! Thanks
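
    The usual culprit for exactly this symptom is a missing WHERE clause on the UPDATE itself: every row gets updated, and rows with no match in the second table receive the subquery's NULL. Restricting the update to matching rows avoids it — a sketch with made-up table names:

        UPDATE big_table
        SET content_column = (SELECT s.content
                              FROM second_table s
                              WHERE s.id = big_table.id)
        WHERE EXISTS (SELECT 1 FROM second_table s WHERE s.id = big_table.id);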

  • Rails : can't write unknown attribute `url'

    - by user2954789
    I am new to Ruby on Rails, and I am learning by creating a blog. I am not able to save to my blogs table, and I get the error "can't write unknown attribute `url`".

    Blogs migration (db/migrate):

        class CreateBlogs < ActiveRecord::Migration
          def change
            create_table :blogs do |t|
              t.string :title
              t.text :description
              t.string :slug

              t.timestamps
            end
          end
        end

    Blogs model (app/models/blogs.rb):

        class Blogs < ActiveRecord::Base
          acts_as_url :title

          def to_param
            url
          end

          validates :title, presence: true
        end

    Blogs controller (app/controllers/blogs_controller.rb):

        class BlogsController < ApplicationController
          before_action :require_login

          def new
            @blogs = Blogs.new
          end

          def show
            @blogs = Blogs.find(params[:id])
          end

          def create
            @blogs = Blogs.new(blogs_params)
            if @blogs.save
              flash[:success] = "Your Blog has been created."
              redirect_to home_path
            else
              render 'new'
            end
          end

          def blogs_params
            params.require(:blogs).permit(:title, :description)
          end

          private

          def require_login
            unless signed_in?
              flash[:error] = "You must be logged in to create a new Blog"
              redirect_to signin_path
            end
          end
        end

    Blogs form (app/views/blogs/new.html.erb):

        <%= form_for @blogs, url: blogs_path do |f| %>
          <%= render 'shared/error_messages_blogs' %>
          <%= f.label :title %>
          <%= f.text_field :title %>
          <%= f.label :description %>
          <%= f.text_area :description %>
          <%= f.submit "Submit Blog", class: "btn btn-large btn-primary" %>
        <% end %>

    I have also added "resources :blogs" to my routes.rb file. I get the error in the controller at "if @blogs.save".
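
    Assuming acts_as_url comes from the stringex gem: by default it writes the generated slug to a url attribute, but this blogs table has no url column — it has slug. Pointing it at the existing column (a sketch) matches the migration:

        class Blogs < ActiveRecord::Base
          # Store the generated URL slug in the existing `slug` column.
          acts_as_url :title, :url_attribute => :slug

          def to_param
            slug
          end

          validates :title, presence: true
        end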

  • Slow query. Wrong database structure?

    - by Tin
    I have a database with a table that contains tasks. Tasks have a lifecycle, and the status of a task's lifecycle can change. These state transitions are stored in a separate table, tasktransitions. Now I wrote a query to find all open/reopened tasks and recently changed tasks, but even with a rather small number of tasks (<1000) I already see that execution time has become very long (0.5s).

    Tasks:

        +-------------+---------+------+-----+---------+----------------+
        | Field       | Type    | Null | Key | Default | Extra          |
        +-------------+---------+------+-----+---------+----------------+
        | taskid      | int(11) | NO   | PRI | NULL    | auto_increment |
        | description | text    | NO   |     | NULL    |                |
        +-------------+---------+------+-----+---------+----------------+

    Tasktransitions:

        +------------------+-----------+------+-----+-------------------+----------------+
        | Field            | Type      | Null | Key | Default           | Extra          |
        +------------------+-----------+------+-----+-------------------+----------------+
        | tasktransitionid | int(11)   | NO   | PRI | NULL              | auto_increment |
        | taskid           | int(11)   | NO   | MUL | NULL              |                |
        | status           | int(11)   | NO   | MUL | NULL              |                |
        | description      | text      | NO   |     | NULL              |                |
        | userid           | int(11)   | NO   |     | NULL              |                |
        | transitiondate   | timestamp | NO   |     | CURRENT_TIMESTAMP |                |
        +------------------+-----------+------+-----+-------------------+----------------+

    Query:

        SELECT tasks.taskid, tasks.description, tasklaststatus.status
        FROM tasks
        LEFT OUTER JOIN (
            SELECT tasktransitions.taskid, tasktransitions.transitiondate, tasktransitions.status
            FROM tasktransitions
            INNER JOIN (
                SELECT taskid, MAX(transitiondate) AS lasttransitiondate
                FROM tasktransitions
                GROUP BY taskid
            ) AS tasklasttransition
                ON tasklasttransition.lasttransitiondate = tasktransitions.transitiondate
               AND tasklasttransition.taskid = tasktransitions.taskid
        ) AS tasklaststatus ON tasklaststatus.taskid = tasks.taskid
        WHERE tasklaststatus.status IS NULL
           OR tasklaststatus.status = 0
           OR tasklaststatus.transitiondate > '2013-09-01';

    I'm wondering if the database structure is the best choice performance-wise. Could adding indexes help? I already tried to add some, but I don't see great improvements:

        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+
        | Table           | Non_unique | Key_name       | Seq_in_index | Column_name      | Collation | Cardinality | Sub_part | Packed | Null | Index_type |
        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+
        | tasktransitions | 0          | PRIMARY        | 1            | tasktransitionid | A         | 896         | NULL     | NULL   |      | BTREE      |
        | tasktransitions | 1          | taskid_date_ix | 1            | taskid           | A         | 896         | NULL     | NULL   |      | BTREE      |
        | tasktransitions | 1          | taskid_date_ix | 2            | transitiondate   | A         | 896         | NULL     | NULL   |      | BTREE      |
        | tasktransitions | 1          | status_ix      | 1            | status           | A         | 3           | NULL     | NULL   |      | BTREE      |
        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+

    Any other suggestions?
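
    One rewrite worth comparing with EXPLAIN (a sketch): replace the GROUP BY derived table with a NOT EXISTS anti-join that selects each task's latest transition directly. The correlated subquery can be satisfied by the existing (taskid, transitiondate) index:

        SELECT t.taskid, t.description, tt.status
        FROM tasks t
        LEFT JOIN tasktransitions tt
               ON tt.taskid = t.taskid
              AND NOT EXISTS (SELECT 1
                              FROM tasktransitions newer
                              WHERE newer.taskid = tt.taskid
                                AND newer.transitiondate > tt.transitiondate)
        WHERE tt.status IS NULL
           OR tt.status = 0
           OR tt.transitiondate > '2013-09-01';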

  • Converting rows to Columns in SQL

    - by Ram
    I have a table (actually a view, but I simplified my example to a table) which gives me some data like this:

        id  CompanyName  website
        1   Google       google.com
        2   Google       google.net
        3   Google       google.org
        4   Google       google.in
        5   Google       google.de
        6   Microsoft    Microsoft.com
        7   Microsoft    live.com
        8   Microsoft    bing.com
        9   Microsoft    hotmail.com

    I am looking to convert it to get a result like this:

        CompanyName  website1       website2    website3    website4     website5   website6
        -----------  -------------  ----------  ----------  -----------  ---------  --------
        Google       google.com     google.net  google.org  google.in    google.de  NULL
        Microsoft    Microsoft.com  live.com    bing.com    hotmail.com  NULL       NULL

    I have looked into PIVOT, but it seems the row values cannot be dynamic (i.e. they can only be certain predefined values). Also, if there are more than 6 websites, I want to limit the result to the first 6. A dynamic pivot makes sense, but would I have to incorporate it into my view? Is there a simpler solution for this? Here are the SQL scripts:

        CREATE TABLE [dbo].[Company](
            [id] [int] NULL,
            [CompanyName] [varchar](50) NULL,
            [website] [varchar](50) NULL
        ) ON [PRIMARY]
        GO

        insert into company values (1,'Google','google.com')
        insert into company values (2,'Google','google.net')
        insert into company values (3,'Google','google.org')
        insert into company values (4,'Google','google.in')
        insert into company values (5,'Google','google.de')
        insert into company values (6,'Microsoft','Microsoft.com')
        insert into company values (7,'Microsoft','live.com')
        insert into company values (8,'Microsoft','bing.com')
        insert into company values (9,'Microsoft','hotmail.com')
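
    Since the output is capped at six fixed columns, the rows can be numbered per company and pivoted on the row number, with no dynamic SQL at all. A sketch against the Company table above:

        -- Number each company's sites, keep the first six, then pivot.
        SELECT CompanyName,
               [1] AS website1, [2] AS website2, [3] AS website3,
               [4] AS website4, [5] AS website5, [6] AS website6
        FROM (
            SELECT CompanyName, website,
                   ROW_NUMBER() OVER (PARTITION BY CompanyName ORDER BY id) AS rn
            FROM Company
        ) AS src
        PIVOT (MAX(website) FOR rn IN ([1], [2], [3], [4], [5], [6])) AS p;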

  • @ManyToMany Duplicate Entry Exception

    - by zp26
    I have mapped a bidirectional many-to-many association between the entities Course and Trainee in the following manner:

        public class Course {
            ...
            private Collection<Trainee> students;
            ...
            @ManyToMany(targetEntity = lesson.domain.Trainee.class,
                        cascade = {CascadeType.ALL},
                        fetch = FetchType.EAGER)
            @JoinTable(name = "COURSE_TRAINEE",
                       joinColumns = @JoinColumn(name = "COURSE_ID"),
                       inverseJoinColumns = @JoinColumn(name = "TRAINEE_ID"))
            @CollectionOfElements
            public Collection<Trainee> getStudents() {
                return students;
            }
            ...
        }

        public class Trainee {
            ...
            private Collection<Course> authCourses;
            ...
            @ManyToMany(cascade = {CascadeType.ALL},
                        fetch = FetchType.EAGER,
                        mappedBy = "students",
                        targetEntity = lesson.domain.Course.class)
            @CollectionOfElements
            public Collection<Course> getAuthCourses() {
                return authCourses;
            }
            ...
        }

    Instead of creating a table whose primary key is made of the two foreign keys (imported from the tables of the two related entities), the system generates the table "COURSE_TRAINEE" with the following schema: I am working on MySQL 5.1 and my app server is JBoss 5.1. Does anyone guess why?
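
    One detail worth checking: @CollectionOfElements is Hibernate's annotation for collections of value types, and combining it with @ManyToMany can make Hibernate treat the association like an element collection, producing its own key layout for the join table. A sketch of the owning side with it removed (the inverse side would drop it likewise), which lets the join table be keyed by the two foreign keys:

        @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
        @JoinTable(name = "COURSE_TRAINEE",
                   joinColumns = @JoinColumn(name = "COURSE_ID"),
                   inverseJoinColumns = @JoinColumn(name = "TRAINEE_ID"))
        public Collection<Trainee> getStudents() {
            return students;
        }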

  • How to increment the string value during run time without using dr.read() and store it in the oledb

    - by sameer
    Here is my code. Everything is working fine except the ReadData() method, in which I want the string value to be incremented on each call, i.e. AM0001, AM0002, AM0003, etc. This happens only once: the value gets incremented to AM0001, and on subsequent calls the same value, i.e. AM0001, is returned again. Because of this I get an error from OleDb, since AM0001 is a primary key field.

        class Jewellery : Connectionstr
        {
            string lmcode = null;
            public string LM_code
            {
                get { return lmcode; }
                set { lmcode = ReadData(); }
            }

            string mname;
            public string M_Name
            {
                get { return mname; }
                set { mname = value; }
            }

            string desc;
            public string Desc
            {
                get { return desc; }
                set { desc = value; }
            }

            public string ReadData()
            {
                string jid = string.Empty;
                string displayString = string.Empty;
                String query = "select max(LM_code) from Master_Accounts";
                Datamanager.RunExecuteReader(Constr, query);
                if (string.IsNullOrEmpty(jid))
                {
                    jid = "AM0000"; // This string value has to increment every time, but it increments only once.
                }
                int len = jid.Length;
                string split = jid.Substring(2, len - 2);
                int num = Convert.ToInt32(split);
                num++;
                displayString = jid.Substring(0, 2) + num.ToString("0000");
                return displayString;
            }

            public void add()
            {
                String query = "insert into Master_Accounts values ('" + LM_code + "','" + M_Name + "','" + Desc + "')";
                Datamanager.RunExecuteNonQuery(Constr, query);
            }
        }

    Any help will be appreciated.
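
    One thing stands out: ReadData() never reads the result of the MAX(LM_code) query back into jid, so jid is always empty and every call returns "AM0001". Since the title asks to avoid a DataReader, ExecuteScalar fits. A sketch with a plain OleDbCommand — adapt to the Datamanager helper as needed:

        using System.Data.OleDb;

        object result;
        using (var con = new OleDbConnection(Constr))
        using (var cmd = new OleDbCommand("select max(LM_code) from Master_Accounts", con))
        {
            con.Open();
            result = cmd.ExecuteScalar();   // e.g. "AM0007", or DBNull on an empty table
        }
        string jid = (result == null || result == DBNull.Value)
                   ? "AM0000"
                   : (string)result;
        // ...then increment the numeric part as before.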

  • How to insert several thousand columns into sqlite3?

    - by user291071
    Similar to my last question, but I ran into a problem. Let's say I have a simple dictionary like below, but big. When I try inserting a big dictionary using the method below, I get an operational error on c.execute(schema) for too many columns. What should my alternate method be to populate the database's columns? Using the ALTER TABLE command and adding each one individually?

        import sqlite3

        con = sqlite3.connect('simple.db')
        c = con.cursor()

        dic = {
            'x1': {'y1': 1.0, 'y2': 0.0},
            'x2': {'y1': 0.0, 'y2': 2.0, 'joe bla': 1.5},
            'x3': {'y2': 2.0, 'y3 45 etc': 1.5}
        }

        # 1. Find the unique column names.
        columns = set()
        for _, cols in dic.items():
            for key, _ in cols.items():
                columns.add(key)

        # 2. Create the schema.
        col_defs = [
            # Start with the column for our key name
            '"row_name" VARCHAR(2) NOT NULL PRIMARY KEY'
        ]
        for column in columns:
            col_defs.append('"%s" REAL NULL' % column)
        schema = "CREATE TABLE simple (%s);" % ",".join(col_defs)
        c.execute(schema)

        # 3. Loop through each row
        for row_name, cols in dic.items():
            # Compile the data we have for this row.
            col_names = cols.keys()
            col_values = [str(val) for val in cols.values()]
            # Insert it.
            sql = 'INSERT INTO simple ("row_name", "%s") VALUES ("%s", "%s");' % (
                '","'.join(col_names),
                row_name,
                '","'.join(col_values)
            )
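
    SQLite caps a table at 2000 columns by default (SQLITE_MAX_COLUMN), so ALTER TABLE won't get past that limit either. A common alternative (sketch) is to store one (row, column, value) triple per cell instead of one physical column per key:

        import sqlite3

        con = sqlite3.connect('simple.db')
        c = con.cursor()
        c.execute("""CREATE TABLE IF NOT EXISTS simple_eav (
                         row_name TEXT NOT NULL,
                         col_name TEXT NOT NULL,
                         value    REAL,
                         PRIMARY KEY (row_name, col_name))""")

        # Works for any number of "columns", using the dic from above.
        for row_name, cols in dic.items():
            c.executemany("INSERT INTO simple_eav VALUES (?, ?, ?)",
                          [(row_name, col, val) for col, val in cols.items()])
        con.commit()

    Reassembling a "row" is then a SELECT by row_name, with the parameterized queries also avoiding the quoting problems of building INSERT strings by hand.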

  • How do I use the Enum value from a class in another part of code?

    - by ChiggenWingz
    Coming from a C# background via a night course at a local college, I've sort of started my way into C++. I'm having a lot of pain getting used to the syntax, and I'm also still very green when it comes to coding techniques. From my WinMain function, I want to be able to access a variable which uses an enum I declared in another class.

    Inside core.h:

        class Core
        {
        public:
            enum GAME_MODE { INIT, MENUS, GAMEPLAY };

            GAME_MODE gameMode;

            Core();
            ~Core();
            ...OtherFunctions();
        };

    Inside main.cpp:

        Core core;

        int WINAPI WinMain(...)
        {
            ... startup code here...
            core.gameMode = Core.GAME_MODE.INIT;
            ...etc...
        }

    Basically I want to set gameMode to the enum value Init or something like that from my WinMain function, and I also want to be able to read it from other areas. I get the error:

        expected primary-expression before '.' token

    If I try to use core.gameMode = Core::GAME_MODE.INIT;, I get the same error. I'm not fussed about best practices; I'm just trying to get a basic understanding of passing around variables in C++ between files. I'll make sure variables are protected and neatly tucked away later on, once I'm used to the flexibility of the syntax. If I remember correctly, C# allowed me to use enums from other classes, and all I had to do was something like Core.ENUMNAME.ENUMVALUE. I hope what I'm wanting to do is clear :\ as I have no idea what a lot of the correct terminology is.
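
    In C++ (before C++11's enum class), the values of an unscoped enum live directly in the enclosing class's scope, so they are reached with :: and without naming the enum itself. A sketch:

        // The enumerator is a member of Core's scope, not of GAME_MODE.
        core.gameMode = Core::INIT;              // not Core.GAME_MODE.INIT

        if (core.gameMode == Core::GAMEPLAY) {
            // ...
        }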

  • Has anyone ever had OpenCV work with Python 2.7 on MacOS 10.6?

    - by ?????
    I've been trying on and off for the past 6 months to get OpenCV to work with Python on MacOS. Every time there's a new release, I try again and fail. I've tried both 64-bit and 32-bit, and both the Xcode gcc and gcc installed via MacPorts. I just spent the past two days on it, hopeful that the latest OpenCV release, which appears to include Python support directly, would work. It doesn't. I've also tried and failed to use this: http://code.google.com/p/pyopencv/ I've been using OpenCV with C++ or Microsoft C++/CLI for the past few years, but I'd love to use it with Python on a Mac, because that is my primary development environment. I'd love to hear from anyone who's actually been able to get the OpenCV Python examples to run under Mac OS 10.6, either 32- or 64-bit. My last attempt was to follow the instructions on this page http://recursive-design.com/blog/2010/12/14/face-detection-with-osx-and-python/ with a clean, fresh install of 10.6 on a 64-bit-capable Mac. My PYTHONPATH is set, and I can see the cv library in it. But an "import cv" from Python fails. Previously, the closest I've ever gotten (again, starting from a clean, fresh 10.6 install) was this:

        Python 2.7.1 (r271:86882M, Nov 30 2010, 10:35:34)
        [GCC 4.2.1 (Apple Inc. build 5664)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import cv
        Fatal Python error: Interpreter not initialized (version mismatch?)
        Abort trap
        thrilllap-2:~ swirsky$

    I've seen a lot of folks answering similar questions here, but have never seen a definitive answer for it.

  • Efficiently select top row for each category in the set

    - by VladV
    I need to select a top row for each category from a known set (somewhat similar to this question). The problem is how to make this query efficient on a large number of rows. For example, let's create a table that stores temperature recordings in several places:

        CREATE TABLE #t (
            placeId int,
            ts datetime,
            temp int,
            PRIMARY KEY (ts, placeId)
        )

        -- insert some sample data
        SET NOCOUNT ON
        DECLARE @n int, @ts datetime
        SELECT @n = 1000, @ts = '2000-01-01'
        WHILE (@n > 0)
        BEGIN
            INSERT INTO #t VALUES (@n % 10, @ts, @n % 37)
            IF (@n % 10 = 0) SET @ts = DATEADD(hour, 1, @ts)
            SET @n = @n - 1
        END

    Now I need to get the latest recording for each of the places 1, 2, 3. This way is efficient, but doesn't scale well (and looks dirty):

        SELECT * FROM (
            SELECT TOP 1 placeId, temp
            FROM #t
            WHERE placeId = 1
            ORDER BY ts DESC
        ) t1
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp
            FROM #t
            WHERE placeId = 2
            ORDER BY ts DESC
        ) t2
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp
            FROM #t
            WHERE placeId = 3
            ORDER BY ts DESC
        ) t3

    The following looks better but works much less efficiently (30% vs 70% according to the optimizer):

        SELECT placeId, ts, temp
        FROM (
            SELECT placeId, ts, temp,
                   ROW_NUMBER() OVER (PARTITION BY placeId ORDER BY ts DESC) rownum
            FROM #t
            WHERE placeId IN (1, 2, 3)
        ) t
        WHERE rownum = 1

    The problem is that in the latter query's execution plan, a clustered index scan is performed on #t and 300 rows are retrieved, sorted, numbered, and then filtered, leaving only 3 rows, whereas for the former query a single row is fetched three times. Is there a way to perform the query efficiently without lots of unions?
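
    A third shape worth benchmarking (a sketch; the VALUES row constructor needs SQL Server 2008, on 2005 a temp table of ids works instead): CROSS APPLY keeps the per-place TOP 1 plan of the first query without the UNION boilerplate:

        SELECT p.placeId, t.ts, t.temp
        FROM (VALUES (1), (2), (3)) AS p(placeId)
        CROSS APPLY (
            SELECT TOP 1 ts, temp
            FROM #t
            WHERE placeId = p.placeId
            ORDER BY ts DESC
        ) AS t;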

  • AngularJS not validating email field in form

    - by idipous
    I have the HTML below, where I have a form that I want to submit to the AngularJS controller:

        <div class="newsletter color-1" id="subscribe" data-ng-controller="RegisterController">
            <form name="registerForm">
                <div class="col-md-6">
                    <input type="email" placeholder="[email protected]" data-ng-model="userEmail" required class="subscribe">
                </div>
                <div class="col-md-2">
                    <button data-ng-click="register()" class="btn btn-primary pull-right btn-block">Subscribe</button>
                </div>
            </form>
        </div>

    And the controller is below:

        app.controller('RegisterController', function ($scope, dataFactory) {
            $scope.users = dataFactory.getUsers();
            $scope.register = function () {
                var userEmail = $scope.userEmail;
                dataFactory.insertUser(userEmail);
                $scope.userEmail = null;
                $scope.ThankYou = "Thank You!";
            }
        });

    The problem is that no validation takes place when I click the button; the request is always routed to the controller even though I do not supply a correct email, so every time I click the button I get the {{ThankYou}} variable displayed. Maybe I do not understand something.
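
    One common cause (a sketch of a fix): ng-click fires regardless of form validity — the email validator only sets registerForm.$invalid, it does not block the handler. Giving the input a name so the form tracks it, and gating the click on the form state, does:

        <input type="email" name="userEmail" placeholder="[email protected]"
               data-ng-model="userEmail" required class="subscribe">

        <button data-ng-click="registerForm.$valid && register()"
                data-ng-disabled="registerForm.$invalid"
                class="btn btn-primary pull-right btn-block">Subscribe</button>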
