Search Results

Search found 16059 results on 643 pages for 'global temp tables'.


  • Improving performance in this query

    - by Luiz Gustavo F. Gama
    I have 3 tables with user logins: sis_login = administrators, tb_rb_estrutura = coordinators, tb_usuario = clients. I created a VIEW that unites all these users, separating them by level, as follows:

        create view `login_names` as
            select `n1`.`cod_login` as `id`, '1' as `level`, `n1`.`nom_user` as `name`
            from `dados`.`sis_login` `n1`
            union all
            select `n2`.`id` as `id`, '2' as `level`, `n2`.`nom_funcionario` as `name`
            from `tb_rb_estrutura` `n2`
            union all
            select `n3`.`cod_usuario` as `id`, '3' as `level`, `n3`.`dsc_nome` as `name`
            from `tb_usuario` `n3`;

    So up to three repeated ids can occur for different users, which is why I separate them by level. This VIEW exists only to return a user's name, given his id and level. Considering there are about 500,000 registered users, the view takes about 1 second to load. That is already too much time, and it becomes a real problem when I need to return the latest posts on my website's forums. The forum tables store the user id and level, and the name is then looked up in this VIEW. I have 18 registered forums; when I run the query, it takes one second per forum = 18 seconds. OMG. And this page loads every time somebody enters my website. This is my query:

        select `x`.`forum_id`, `x`.`topic_id`, `l`.`nome`
        from (
            select `t`.`forum_id`, `t`.`topic_id`, `t`.`data`, `t`.`user_id`, `t`.`user_level`
            from `tb_forum_topics` `t`
            union all
            select `a`.`forum_id`, `a`.`topic_id`, `a`.`data`, `a`.`user_id`, `a`.`user_level`
            from `tb_forum_answers` `a`
        ) `x`
        left outer join `login_names` `l`
            on `l`.`id` = `x`.`user_id` and `l`.`level` = `x`.`user_level`
        group by `x`.`forum_id` asc

    Using EXPLAIN:

        id    select_type   table         type  possible_keys  key   key_len  ref   rows    Extra
        1     PRIMARY       <derived2>    ALL   NULL           NULL  NULL     NULL  6       Using temporary; Using filesort
        1     PRIMARY       <derived4>    ALL   NULL           NULL  NULL     NULL  530415
        4     DERIVED       n1            ALL   NULL           NULL  NULL     NULL  114
        5     UNION         n2            ALL   NULL           NULL  NULL     NULL  2
        6     UNION         n3            ALL   NULL           NULL  NULL     NULL  530299
        NULL  UNION RESULT  <union4,5,6>  ALL   NULL           NULL  NULL     NULL  NULL
        2     DERIVED       t             ALL   NULL           NULL  NULL     NULL  3
        3     UNION         r             ALL   NULL           NULL  NULL     NULL  3
        NULL  UNION RESULT  <union2,3>    ALL   NULL           NULL  NULL     NULL  NULL

    Can somebody help me or give a suggestion?
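
    A direction that sometimes helps here (a sketch, untested against this schema, and assuming cod_login, id and cod_usuario are the indexed primary keys of their tables): MySQL 5.x materializes a UNION view such as login_names into an unindexed temporary table, so the LEFT JOIN can never use a key. Joining the three base tables directly, one per level, keeps every name lookup on an index:

        select `x`.`forum_id`, `x`.`topic_id`,
               coalesce(`n1`.`nom_user`, `n2`.`nom_funcionario`, `n3`.`dsc_nome`) as `nome`
        from (
            select `forum_id`, `topic_id`, `user_id`, `user_level` from `tb_forum_topics`
            union all
            select `forum_id`, `topic_id`, `user_id`, `user_level` from `tb_forum_answers`
        ) `x`
        left join `dados`.`sis_login` `n1`
            on `x`.`user_level` = 1 and `n1`.`cod_login` = `x`.`user_id`
        left join `tb_rb_estrutura` `n2`
            on `x`.`user_level` = 2 and `n2`.`id` = `x`.`user_id`
        left join `tb_usuario` `n3`
            on `x`.`user_level` = 3 and `n3`.`cod_usuario` = `x`.`user_id`
        group by `x`.`forum_id`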

    Read the article

  • Where do objects merge/join data in a 3-tier model?

    - by BerggreenDK
    It's probably a simple 3-tier problem; I just want to make sure we use best practice, and I am not that familiar with the structures yet. We have the 3 tiers:

    - GUI: ASP.NET for the presentation layer (first platform)
    - BAL: the business layer will handle the logic on a web server in C#, so we can use it both for webforms/MVC and for web services
    - DAL: LINQ to SQL in the data layer, returning business objects, not LINQ entities
    - DB: the SQL engine will be Microsoft SQL Server/Express (we haven't decided yet)

    Let's think of a setup where we have a database of [Person]s. They can all have multiple [Address]es, and we have a complete list of all [PostalCode]s with the corresponding city names, etc. The deal is that we join in a lot of details from other tables:

        {Relations}/[tables]
        [Person]:1  --- N:{PersonAddress}:M --- 1:[Address]
        [Address]:N --- 1:[PostalCode]

    Now we want to build the DAL for Person. How should the PersonBO look, and when do the joins occur? Is it a business-layer problem to fetch all city names and possible addresses per Person, or should the DAL complete all this before returning the PersonBO to the BAL?

        class PersonBO
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public List<AddressBO> Addresses { get; set; } // Question #1
        }
        // Q1: do we retrieve the objects before returning the PersonBO, and should it
        // be an array instead? Or is this totally wrong for n-tier/3-tier?

        class AddressBO
        {
            public int ID { get; set; }
            public string StreetName { get; set; }
            public int PostalCode { get; set; } // Question #2
        }
        // Q2: do we make the lookup now, or just leave the PostalCode for a later lookup?

    Can anyone explain in what order to pull which objects? Constructive criticism is very welcome. :o)
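
    For comparison, a minimal sketch of the "DAL composes the aggregate" option (all context, association and property names here are hypothetical, and CityName is an extra BO field illustrating Q2's eager lookup):

        public class PersonRepository
        {
            private readonly MyDataContext ctx; // hypothetical LINQ to SQL context

            public PersonRepository(MyDataContext context)
            {
                ctx = context;
            }

            // One round trip: the person, addresses and city names come back
            // together, and only plain BOs cross the DAL boundary.
            public PersonBO GetPerson(int id)
            {
                return (from p in ctx.Persons
                        where p.ID == id
                        select new PersonBO
                        {
                            ID = p.ID,
                            Name = p.Name,
                            Addresses = (from pa in p.PersonAddresses
                                         select new AddressBO
                                         {
                                             ID = pa.Address.ID,
                                             StreetName = pa.Address.StreetName,
                                             PostalCode = pa.Address.PostalCode,
                                             // hypothetical association; resolves Q2 eagerly
                                             CityName = pa.Address.PostalCodeRow.CityName
                                         }).ToList()
                        }).Single();
            }
        }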

    Read the article

  • MySQL many-to-many problem (leaderboard/scoreboard)

    - by zoko2902
    Hi all! I'm working on a small project for the upcoming World Cup. I'm building a roster/leaderboard/scoreboard based on groups of national teams. The idea is to have information on all upcoming matches within the group or in the knockout phase (scores, time of the match, match stats, etc.). Currently I'm stuck with the DB, in that I can't come up with a query that returns the paired teams in one row. I have these 3 tables:

        CREATE TABLE IF NOT EXISTS `wc_team` (
            `id` INT NOT NULL AUTO_INCREMENT ,
            `name` VARCHAR(45) NULL ,
            `description` VARCHAR(250) NULL ,
            `flag` VARCHAR(45) NULL ,
            `image` VARCHAR(45) NULL ,
            `added` TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP ,
            PRIMARY KEY (`id`) ,

        CREATE TABLE IF NOT EXISTS `wc_match` (
            `id` INT NOT NULL AUTO_INCREMENT ,
            `score` VARCHAR(6) NULL ,
            `date` DATE NULL ,
            `time` VARCHAR(45) NULL ,
            `added` TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP ,
            PRIMARY KEY (`id`) ,

        CREATE TABLE IF NOT EXISTS `wc_team_has_match` (
            `wc_team_id` INT NOT NULL ,
            `wc_match_id` INT NOT NULL ,
            PRIMARY KEY (`wc_team_id`, `wc_match_id`) ,

    I've simplified the tables so we don't go in the wrong direction. Now, I've tried all kinds of joins and groupings I could think of, but I never seem to get there. Example query:

        SELECT t.wc_team_id, t.wc_match_id, c.id, c.name, d.id, d.name
        FROM wc_team_has_match AS t
        LEFT JOIN wc_match AS s ON t.wc_match_id = s.id
        LEFT JOIN wc_team AS c ON t.wc_team_id = c.id
        LEFT JOIN wc_team AS d ON t.wc_team_id = d.id

    Which returns:

        wc_team_id  wc_match_id  id  name       id  name
        16          5            16  Brazil     16  Brazil
        18          5            18  Argentina  18  Argentina

    But what I really want is:

        wc_team_id  wc_match_id  id  name    id  name
        16          5            16  Brazil  18  Argentina

    Keep in mind that a group has more matches, and I want to see all those matches, not only one. Any pointer or suggestion would be extremely appreciated, since I'm stuck like a duck on this one :).
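
    A sketch of one common shape for this (untested against the real schema): self-join the link table on the match id, and order the two team ids so each pairing appears exactly once:

        SELECT hm1.wc_match_id,
               t1.id AS team1_id, t1.name AS team1_name,
               t2.id AS team2_id, t2.name AS team2_name
        FROM wc_team_has_match AS hm1
        INNER JOIN wc_team_has_match AS hm2
                ON hm2.wc_match_id = hm1.wc_match_id
               AND hm1.wc_team_id < hm2.wc_team_id   -- each pairing appears once
        INNER JOIN wc_team  AS t1 ON t1.id = hm1.wc_team_id
        INNER JOIN wc_team  AS t2 ON t2.id = hm2.wc_team_id
        INNER JOIN wc_match AS m  ON m.id  = hm1.wc_match_id;

    Because the pairing is per match id, every match in a group comes back as its own row.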

    Read the article

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GByte RAM. The database is multi-tenant: each customer has his own file, around 5 to 10 GByte each, and I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight; otherwise the database is read-only. C# is my favourite language, and I want to use ASP.NET MVC. I thought about the following options:

    - Use two big SQL servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services.
    - Denormalize the database and use Redis on each web server, with BookSleeve as the Redis client.
    - Use a combination of SQL Server and Redis.
    - Use SQL Server 2012 together with Hadoop.
    - Use Hadoop without SQL Server.

    What is the best way, for a read-only database, to get the best performance without losing maintainability? Does map-reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and the interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table on which all queries can be performed, with all other information from the other tables reachable by a simple key lookup.

    Read the article

  • SQL Structure of DB table with different types of columns

    - by Dmitry Dvornikov
    I have a problem with the optimization of the structure of a database. I'll try to explain it exactly. I'm creating a project where we can add different values, but these values must have different column types in the database (e.g. int, double, varchar). What is the best way to store values of different types in the database? In the project I'm using Propel 1.6. The point is to be able to add a value with an 'int', 'varchar' or other column type, while keeping searches on the table efficient.

    In total, I have two ideas. The first is to create a table "value" which will have the columns "id", "value_int", "value_double", "value_varchar", etc., with the corresponding column types. Depending on the type of the value, each record is saved with the value in the appropriate column (the rest are NULL). The second solution is to create separate tables such as "value_int", "value_varchar", etc. These would have the columns "id" and "value", where "value" has the corresponding type (int, varchar, and so on).

    I must admit that I'm not convinced by either of the above solutions. Originally I was thinking about one table "value" where the value column would be of type "text", but that solution would probably be even worse. I would like to know your opinion on this topic; maybe something else would be better. Thanks in advance.

    EDIT: For example, we have three tables:

        USER [table of users]:
            * id
            * name

        FIELD [table of profile fields, where the column 'type' is the type of the field, e.g. int or varchar]:
            * id
            * type
            * name

        VALUE:
            * id
            * user_id  (FK user.id)
            * field_id (FK field.id)
            * value

    So each row in the USER table is a user, and the profile is stored in the VALUE table. But each profile field may have a different type (the column 'type' in the FIELD table), and based on that I want the value to be added to the appropriate column of the appropriate type.
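
    A sketch of the first idea in MySQL flavour (names hypothetical): one nullable column per type, plus composite indexes so each typed search avoids conversion functions:

        CREATE TABLE `field_value` (
            `id`            INT NOT NULL AUTO_INCREMENT,
            `user_id`       INT NOT NULL,
            `field_id`      INT NOT NULL,
            `value_int`     INT          NULL,
            `value_double`  DOUBLE       NULL,
            `value_varchar` VARCHAR(255) NULL,
            PRIMARY KEY (`id`),
            KEY `idx_field_int`     (`field_id`, `value_int`),
            KEY `idx_field_double`  (`field_id`, `value_double`),
            KEY `idx_field_varchar` (`field_id`, `value_varchar`)
        );

        -- e.g. "users whose field 7 (an int field) is under 15" stays sargable:
        SELECT `user_id` FROM `field_value`
        WHERE `field_id` = 7 AND `value_int` < 15;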

    Read the article

  • edmx - The operation could not be completed - When adding Inheritance

    - by vdh_ant
    Hey guys, I have an edmx model onto which I have dragged 2 tables: one called 'File' and the other 'ApplicaitonFile'. These two tables have a 1 to 1 relationship in the database. If I stop here, everything works fine. But in my model I want 'ApplicaitonFile' to inherit from 'File'. So I delete the 1 to 1 relationship, then configure 'ApplicaitonFile' to inherit from 'File', and then remove the FileId from 'ApplicaitonFile', which was its primary key. (Note I am following the instructions from here.) If I leave the model open at this point everything is fine, but as soon as I close it and try to reopen it, I get the following error: "The operation could not be completed". I have been searching for a solution and found this - http://stackoverflow.com/questions/944050/entity-model-does-not-load - but as far as I can tell I don't have duplicate InheritanceConnectors (although I don't know exactly what I'm looking for, I can't see anything out of the ordinary, like 2 connectors with the same name), and the relationship I originally had is a 1 to 1, not a 1 to 0..1. Any ideas? This is driving me crazy...

    Read the article

  • C# Dataset Dynamically Add DataColumn

    - by Wesley
    I am trying to add an extra column to a dataset after a query has completed. I have a database relationship of the following:

             Employees
            /         \
        Groups     EmployeeGroups

    Employees holds all the data for each individual; I'll name the unique key UserID. Groups holds all the groups that an employee can be a part of, i.e. Super User, Admin, User, etc.; I'll name the unique key GroupID. EmployeeGroups holds all the associations of which groups each employee belongs to (UserID | GroupID).

    What I am trying to accomplish: after querying for all users, I want to loop through each user and record the groups that user is a part of, by adding a new column to the dataset named 'Groups' - a string to hold the values from the next query, which fetches all the groups that user belongs to. Then, via data binding, I populate a ListView with all employees and their group associations. My code is as follows; position 5 is the new column I am trying to add to the dataset.

        string theQuery = "select UserID, FirstName, LastName, EmployeeID, Active from Employees";
        DataSet theEmployeeSet = itsDatabase.runQuery(theQuery);
        DataColumn theCol = new DataColumn("Groups", typeof(string));
        theEmployeeSet.Tables[0].Columns.Add(theCol);
        foreach (DataRow theRow in theEmployeeSet.Tables[0].Rows)
        {
            theRow.ItemArray[5] = "1234";
        }

    At the moment, the code creates the new column, but when I assign the data to that column, nothing gets assigned. What am I missing? If there is any further explanation or information I can provide, please let me know. Thank you all
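
    The likely culprit: DataRow.ItemArray returns a copy of the row's values, so assignments into that copy never reach the row itself. Indexing the row directly works, e.g.:

        foreach (DataRow theRow in theEmployeeSet.Tables[0].Rows)
        {
            // ItemArray is a snapshot; write through the row instead.
            theRow["Groups"] = "1234";   // or: theRow[5] = "1234";
        }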

    Read the article

  • How to design data storage for partitioned tagging system?

    - by Morgan Cheng
    How should one design data storage for a huge tagging system (like Digg or Delicious)? There is already discussion about it, but it is about a centralized database. Since the data is supposed to grow, we'll need to partition the data into multiple shards sooner or later. So, the question becomes: how to design data storage for a partitioned tagging system?

    The tagging system basically has 3 tables:

        Item       (item_id, item_content)
        Tag        (tag_id, tag_title)
        TagMapping (map_id, tag_id, item_id)

    That works fine for finding all items for a given tag and finding all tags for a given item, if the tables are stored in one database instance. If we need to partition the data into multiple database instances, it is not that easy. For table Item, we can partition its content by its key item_id. For table Tag, we can partition its content by its key tag_id. For example, to partition table Tag into K databases, we can simply choose database number (tag_id % K) to store a given tag.

    But how to partition table TagMapping? The TagMapping table represents the many-to-many relationship. The only solution I can imagine is duplication: the same TagMapping content gets two copies, one partitioned by tag_id and the other partitioned by item_id. To find the items for a given tag, we use the copy partitioned by tag_id; to find the tags for a given item, we use the copy partitioned by item_id. As a result, there is data redundancy, and the application level has to keep all the tables consistent. That looks hard. Is there any better solution to this many-to-many partitioning problem?

    Read the article

  • Is it better to use a relational database or document-based database for an app like Wufoo?

    - by mboyle
    I'm working on an application that's similar to Wufoo, in that it allows our users to create their own databases and collect/present records with auto-generated forms and views. Since every user is creating a different schema (one user might have a database of their baseball card collection, another a database of their recipes), our current approach is to use MySQL to create a separate database for every user, with its own tables. So in other words, the databases our MySQL server contains look like:

        main-web-app-db  (our web app's database, containing tables for user account info, billing, etc.)
        user_1_db        (baseball_cards_table)
        user_2_db        (recipes_table)
        ...

    And so on. If a user wants to set up a new database to keep track of their DVD collection, we do a "create database ..." with "create table ...". If they enter some data and then decide they want to change a column, we do an "alter table ...". Now, the further along I get with building this out, the more it seems like MySQL is poorly suited to handling it.

    1) My first concern is that switching databases on every request - first to our main app's database for authentication etc., and then to the user's personal database - is going to be inefficient.

    2) My second concern is that there is going to be a limit to the number of databases a single MySQL server can host. Pretending for a moment that this application had 500,000 user databases, is MySQL designed to operate this way? What if it were a million, or more?

    3) Lastly, is this method going to be a nightmare to support and scale? I've never heard of MySQL being used in this way, so I do worry about how it affects things like replication and other methods of scaling. To me it seems like MySQL wasn't built to be used in this way, but what do I know.

    I've been looking at document-based databases like MongoDB, CouchDB, and Redis as alternatives, because a schema-less approach to this particular problem seems to make a lot of sense. Can anyone offer some advice on this?

    Read the article

  • VB.NET Update Access Database with DataTable

    - by sinDizzy
    I've been perusing some help forums and some help books, but can't seem to get my head wrapped around this. My task is to read data from two text files and then load that data into an existing MS Access 2007 database. So here is what I'm trying to do:

    1. Read data from the first text file and, for every line of data, add a row to a DataTable, using CarID as my unique field.
    2. Read data from the second text file and look for an existing CarID in the DataTable. If it exists, update that row; if it doesn't exist, add a new row.
    3. Once I'm done, push the contents of the DataTable to the database.

    What I have so far:

        Dim sSQL As String = "SELECT * FROM tblCars"
        Dim da As New OleDb.OleDbDataAdapter(sSQL, conn)
        Dim ds As New DataSet
        da.Fill(ds, "CarData")
        Dim cb As New OleDb.OleDbCommandBuilder(da)

        'loop: read a line of text and parse it, getting dd, dc, and carID

        'create a new empty row
        Dim dsNewRow As DataRow = ds.Tables("CarData").NewRow()

        'update the new row with fresh data
        dsNewRow.Item("DriveDate") = dd
        dsNewRow.Item("DCode") = dc
        dsNewRow.Item("CarNum") = carID
        'about 15 more fields

        'add the filled row to the DataSet table
        ds.Tables("CarData").Rows.Add(dsNewRow)
        'end loop

        'update the database with the new rows
        da.Update(ds, "CarData")

    Questions:

    1. In constructing my DataTable I use "SELECT * FROM tblCars", but what if that table already has millions of records? Isn't that a waste of resources? Should I be trying something different if I only want to add new records?
    2. Once I'm done with the first text file, I then go to my next text file. What's the best approach here: to first look for an existing record based on CarNum, or to create a second table and then merge the two at the end?
    3. Finally, when the DataTable is fully populated and I'm pushing it to the database, I want to make sure that if records already exist with the three primary fields (DriveDate, DCode, and CarNum), they get updated with the new fields, and if they don't exist, the records get appended. Is that possible with my process?

    tia AGP
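
    On question 2, one sketch of the in-memory matching (assuming CarNum uniquely identifies a car within this import): give the DataTable a primary key, so Rows.Find can locate an existing row without scanning:

        'Assumption: CarNum is unique per car in the import data
        Dim tbl As DataTable = ds.Tables("CarData")
        tbl.PrimaryKey = New DataColumn() {tbl.Columns("CarNum")}

        Dim existing As DataRow = tbl.Rows.Find(carID)
        If existing Is Nothing Then
            Dim dsNewRow As DataRow = tbl.NewRow()
            dsNewRow.Item("CarNum") = carID
            'fill the remaining fields here...
            tbl.Rows.Add(dsNewRow)
        Else
            existing.Item("DriveDate") = dd   'update the matched row in place
            existing.Item("DCode") = dc
        End If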

    Read the article

  • can someone help me fix my code?

    - by user267490
    Hi, I have this code I've been working on, but I'm having a hard time getting it to work. I wrote one version, but it only works in PHP 5.3, and I realized my host only supports PHP 5.0! So I was trying to see if I could get it to work on my server correctly; I'm just lost and tired lol.

        <?php
        //Temporarily turn on error reporting
        @ini_set('display_errors', 1);
        error_reporting(E_ALL);

        // Set default timezone (New PHP versions complain without this!)
        date_default_timezone_set("GMT");

        // Common
        set_time_limit(0);
        require_once('dbc.php');
        require_once('sessions.php');
        page_protect();

        // Image settings
        define('IMG_FIELD_NAME', 'cons_image');
        // Max upload size in bytes (for form)
        define ('MAX_SIZE_IN_BYTES', '512000');
        // Width and height for the thumbnail
        define ('THUMB_WIDTH', '150');
        define ('THUMB_HEIGHT', '150');
        ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
        <head>
            <title>whatever</title>
            <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
            <style type="text\css">
                .validationerrorText { color:red; font-size:85%; font-weight:bold; }
            </style>
        </head>
        <body>
            <h1>Change image</h1>
        <?php
        $errors = array();

        // Process form
        if (isset($_POST['submit'])) {
            // Get filename
            $filename = stripslashes($_FILES['cons_image']['name']);

            // Validation of image file upload
            $allowedFileTypes = array('image/gif', 'image/jpg', 'image/jpeg', 'image/png');
            if ($_FILES[IMG_FIELD_NAME]['error'] == UPLOAD_ERR_NO_FILE) {
                $errors['img_empty'] = true;
            } elseif (($_FILES[IMG_FIELD_NAME]['type'] != '') && (!in_array($_FILES[IMG_FIELD_NAME]['type'], $allowedFileTypes))) {
                $errors['img_type'] = true;
            } elseif (($_FILES[IMG_FIELD_NAME]['error'] == UPLOAD_ERR_INI_SIZE) || ($_FILES[IMG_FIELD_NAME]['error'] == UPLOAD_ERR_FORM_SIZE) || ($_FILES[IMG_FIELD_NAME]['size'] > MAX_SIZE_IN_BYTES)) {
                $errors['img_size'] = true;
            } elseif ($_FILES[IMG_FIELD_NAME]['error'] != UPLOAD_ERR_OK) {
                $errors['img_error'] = true;
            } elseif (strlen($_FILES[IMG_FIELD_NAME]['name']) > 200) {
                $errors['img_nametoolong'] = true;
            } elseif ((file_exists("\\uploads\\{$username}\\images\\banner\\{$filename}")) || (file_exists("\\uploads\\{$username}\\images\\banner\\thumbs\\{$filename}"))) {
                $errors['img_fileexists'] = true;
            }
            if (! empty($errors)) {
                unlink($_FILES[IMG_FIELD_NAME]['tmp_name']); //cleanup: delete temp file
            }

            // Create thumbnail
            if (empty($errors)) {
                // Make directory if it doesn't exist
                if (!is_dir("\\uploads\\{$username}\\images\\banner\\thumbs\\")) {
                    // Take directory and break it down into folders
                    $dir = "uploads\\{$username}\\images\\banner\\thumbs";
                    $folders = explode("\\", $dir);

                    // Create directory, adding folders as necessary as we go
                    // (ignore mkdir() errors, we'll check existence of full dir in a sec)
                    $dirTmp = '';
                    foreach ($folders as $fldr) {
                        if ($dirTmp != '') { $dirTmp .= "\\"; }
                        $dirTmp .= $fldr;
                        mkdir("\\".$dirTmp); //ignoring errors deliberately!
                    }

                    // Check again whether it exists
                    if (!is_dir("\\uploads\\$username\\images\\banner\\thumbs\\")) {
                        $errors['move_source'] = true;
                        unlink($_FILES[IMG_FIELD_NAME]['tmp_name']); //cleanup: delete temp file
                    }
                }
                if (empty($errors)) {
                    // Move uploaded file to final destination
                    if (! move_uploaded_file($_FILES[IMG_FIELD_NAME]['tmp_name'], "/uploads/$username/images/banner/$filename")) {
                        $errors['move_source'] = true;
                        unlink($_FILES[IMG_FIELD_NAME]['tmp_name']); //cleanup: delete temp file
                    } else {
                        // Create thumbnail in new dir
                        if (! make_thumb("/uploads/$username/images/banner/$filename", "/uploads/$username/images/banner/thumbs/$filename")) {
                            $errors['thumb'] = true;
                            unlink("/uploads/$username/images/banner/$filename"); //cleanup: delete source file
                        }
                    }
                }
            }

            // Record in database
            if (empty($errors)) {
                // Find existing record and delete existing images
                $sql = "SELECT `bannerORIGINAL`, `bannerTHUMB` FROM `agent_settings` WHERE (`agent_id`={$user_id}) LIMIT 1";
                $result = mysql_query($sql);
                if (!$result) {
                    unlink("/uploads/$username/images/banner/$filename"); //cleanup: delete source file
                    unlink("/uploads/$username/images/banner/thumbs/$filename"); //cleanup: delete thumbnail file
                    die("<div><b>Error: Problem occurred with Database Query!</b><br /><br /><b>File:</b> " . __FILE__ . "<br /><b>Line:</b> " . __LINE__ . "<br /><b>MySQL Error Num:</b> " . mysql_errno() . "<br /><b>MySQL Error:</b> " . mysql_error() . "</div>");
                }
                $numResults = mysql_num_rows($result);
                if ($numResults == 1) {
                    $row = mysql_fetch_assoc($result);
                    // Delete old files
                    unlink("/uploads/$username/images/banner/" . $row['bannerORIGINAL']); //delete OLD source file
                    unlink("/uploads/$username/images/banner/thumbs/" . $row['bannerTHUMB']); //delete OLD thumbnail file
                }

                // Update/create record with new images
                if ($numResults == 1) {
                    $sql = "INSERT INTO `agent_settings` (`agent_id`, `bannerORIGINAL`, `bannerTHUMB`) VALUES ({$user_id}, '/uploads/$username/images/banner/$filename', '/uploads/$username/images/banner/thumbs/$filename')";
                } else {
                    $sql = "UPDATE `agent_settings` SET `bannerORIGINAL`='/uploads/$username/images/banner/$filename', `bannerTHUMB`='/uploads/$username/images/banner/thumbs/$filename' WHERE (`agent_id`={$user_id})";
                }
                $result = mysql_query($sql);
                if (!$result) {
                    unlink("/uploads/$username/images/banner/$filename"); //cleanup: delete source file
                    unlink("/uploads/$username/images/banner/thumbs/$filename"); //cleanup: delete thumbnail file
                    die("<div><b>Error: Problem occurred with Database Query!</b><br /><br /><b>File:</b> " . __FILE__ . "<br /><b>Line:</b> " . __LINE__ . "<br /><b>MySQL Error Num:</b> " . mysql_errno() . "<br /><b>MySQL Error:</b> " . mysql_error() . "</div>");
                }
            }

            // Print success message and show the thumbnail image created
            if (empty($errors)) {
                echo "<p>Thumbnail created Successfully!</p>\n";
                echo "<img src=\"/uploads/$username/images/banner/thumbs/$filename\" alt=\"New image thumbnail\" />\n";
                echo "<br />\n";
            }
        }
        if (isset($errors['move_source'])) {
            echo "\t\t<div>Error: Failure occurred moving uploaded source image!</div>\n";
        }
        if (isset($errors['thumb'])) {
            echo "\t\t<div>Error: Failure occurred creating thumbnail!</div>\n";
        }
        ?>
        <form action="" enctype="multipart/form-data" method="post">
            <input type="hidden" name="MAX_FILE_SIZE" value="<?php echo MAX_SIZE_IN_BYTES; ?>" />
            <label for="<?php echo IMG_FIELD_NAME; ?>">Image:</label>
            <input type="file" name="<?php echo IMG_FIELD_NAME; ?>" id="<?php echo IMG_FIELD_NAME; ?>" />
        <?php
        if (isset($errors['img_empty'])) {
            echo "\t\t<div class=\"validationerrorText\">Required!</div>\n";
        }
        if (isset($errors['img_type'])) {
            echo "\t\t<div class=\"validationerrorText\">File type not allowed! GIF/JPEG/PNG only!</div>\n";
        }
        if (isset($errors['img_size'])) {
            echo "\t\t<div class=\"validationerrorText\">File size too large! Maximum size should be " . MAX_SIZE_IN_BYTES . "bytes!</div>\n";
        }
        if (isset($errors['img_error'])) {
            echo "\t\t<div class=\"validationerrorText\">File upload error occured! Error code: {$_FILES[IMG_FIELD_NAME]['error']}</div>\n";
        }
        if (isset($errors['img_nametoolong'])) {
            echo "\t\t<div class=\"validationerrorText\">Filename too long! 200 Chars max!</div>\n";
        }
        if (isset($errors['img_fileexists'])) {
            echo "\t\t<div class=\"validationerrorText\">An image file already exists with that name!</div>\n";
        }
        ?>
            <br /><input type="submit" name="submit" id="image1" value="Upload image" />
        </form>
        </body>
        </html>
        <?php
        #################################
        #
        # F U N C T I O N S
        #
        #################################

        /*
         * Function: make_thumb
         *
         * Creates the thumbnail image from the uploaded image;
         * the resize is done using the width and height defined,
         * but without deforming the image.
         *
         * @param $sourceFile Path and filename of source image
         * @param $destFile   Path and filename to save thumbnail as
         * @param $new_w      the new width to use
         * @param $new_h      the new height to use
         */
        function make_thumb($sourceFile, $destFile, $new_w=false, $new_h=false)
        {
            if ($new_w === false) { $new_w = THUMB_WIDTH; }
            if ($new_h === false) { $new_h = THUMB_HEIGHT; }

            // Get image extension
            $ext = strtolower(getExtension($sourceFile));

            // Copy source
            switch($ext) {
                case 'jpg':
                case 'jpeg':
                    $src_img = imagecreatefromjpeg($sourceFile);
                    break;
                case 'png':
                    $src_img = imagecreatefrompng($sourceFile);
                    break;
                case 'gif':
                    $src_img = imagecreatefromgif($sourceFile);
                    break;
                default:
                    return false;
            }
            if (!$src_img) {
                return false;
            }

            // Get dimensions of the source image
            $old_x = imageSX($src_img);
            $old_y = imageSY($src_img);

            // Calculate the new dimensions for the thumbnail image
            // 1. calculate the ratio by dividing the old dimensions by the new ones
            // 2. if the ratio for the width is higher, the width will remain the one
            //    defined in the WIDTH variable, and the height will be calculated so
            //    the image ratio will not change
            // 3. otherwise we will use the height ratio for the image
            // as a result, only one of the dimensions will be from the fixed ones
            $ratio1 = $old_x / $new_w;
            $ratio2 = $old_y / $new_h;
            if ($ratio1 > $ratio2) {
                $thumb_w = $new_w;
                $thumb_h = $old_y / $ratio1;
            } else {
                $thumb_h = $new_h;
                $thumb_w = $old_x / $ratio2;
            }

            // Create a new image with the new dimensions
            $dst_img = ImageCreateTrueColor($thumb_w, $thumb_h);

            // Resize the big image to the newly created one
            imagecopyresampled($dst_img, $src_img, 0, 0, 0, 0, $thumb_w, $thumb_h, $old_x, $old_y);

            // Output the created image to the file.
            // Now we will have the thumbnail in the file named by $filename
            switch($ext) {
                case 'jpg':
                case 'jpeg':
                    $result = imagepng($dst_img, $destFile);
                    break;
                case 'png':
                    $result = imagegif($dst_img, $destFile);
                    break;
                case 'gif':
                    $result = imagejpeg($dst_img, $destFile);
                    break;
                default:
                    //should never occur!
            }
            if (!$result) {
                return false;
            }

            // Destroy source and destination images
            imagedestroy($dst_img);
            imagedestroy($src_img);
            return true;
        }

        /*
         * Function: getExtension
         *
         * Returns the file extension from a given filename/path
         *
         * @param $str the filename to get the extension from
         */
        function getExtension($str)
        {
            return pathinfo($str, PATHINFO_EXTENSION);
        }
        ?>
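
    One concrete bug stands out on a read-through (an observation, not a full review): in the "Update/create record" section the branches appear inverted - the INSERT runs when a row already exists ($numResults == 1), and the UPDATE runs when none does. Swapped, it would read:

        // Update/create record with new images
        if ($numResults == 1) {
            // a row already exists -> update it
            $sql = "UPDATE `agent_settings` SET `bannerORIGINAL`='/uploads/$username/images/banner/$filename', `bannerTHUMB`='/uploads/$username/images/banner/thumbs/$filename' WHERE (`agent_id`={$user_id})";
        } else {
            // no row yet -> insert one
            $sql = "INSERT INTO `agent_settings` (`agent_id`, `bannerORIGINAL`, `bannerTHUMB`) VALUES ({$user_id}, '/uploads/$username/images/banner/$filename', '/uploads/$username/images/banner/thumbs/$filename')";
        }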

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins.

    Before executing a query, SQL Server generates an Execution Plan. The Execution Plan is basically an expression tree describing what it believes is the best way to execute the query. Each node provides information on whether to do a Sort, Scan, Select, Join, etc. On a 'Join' node in our execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server chooses which algorithm to use for each join operation based on the expected number of rows in the inner and outer tables, the type of join we are doing (some algorithms don't support all join types), whether we need the data ordered, and probably many other factors.

    Join algorithms:

    - Nested Loops Join: best for small inputs; can be optimized with an ordered inner table.
    - Merge Join: best for medium to large sorted inputs, or for output that needs to be ordered.
    - Hash Join: best for medium to large inputs; can be parallelized to scale linearly.

    LINQ query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable ()
                   join secondRow in secondTable.AsEnumerable ()
                       on firstRow.Field<object> (randomObject.Property)
                       equals secondRow.Field<object> (randomObject.Property)
                   select new {firstRow, secondRow};

    SQL query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a Nested Loops Join if it knows there are a small number of rows in each table, a Merge Join if it knows one of the tables has an index, and a Hash Join if it knows there are a lot of rows in either table and neither has an index. Does LINQ choose its algorithm for joins, or does it always use one?
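
    To the final question: LINQ to Objects has no optimizer and no statistics, so it always uses the same strategy. Enumerable.Join buffers the inner (second) sequence into a hash-based Lookup and then streams the outer sequence past it - essentially a hash join. A simplified sketch (not the exact BCL source):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class JoinSketch
        {
            static IEnumerable<TResult> HashJoin<TOuter, TInner, TKey, TResult>(
                IEnumerable<TOuter> outer, IEnumerable<TInner> inner,
                Func<TOuter, TKey> outerKey, Func<TInner, TKey> innerKey,
                Func<TOuter, TInner, TResult> resultSelector)
            {
                // Build phase: buffer the inner sequence into a hash-based lookup.
                ILookup<TKey, TInner> lookup = inner.ToLookup(innerKey);

                // Probe phase: stream the outer sequence once, probing per element.
                foreach (TOuter o in outer)
                    foreach (TInner i in lookup[outerKey(o)])
                        yield return resultSelector(o, i);
            }
        }

    One practical consequence: because the inner sequence is the one that gets buffered, putting the smaller table on the join's inner side keeps memory use down; there is no optimizer to make that choice for you.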

    Read the article

  • Partitioning with Hibernate

    - by Alex
    Hello. We have a requirement to delete data in the range of 200K rows from the database every day. Our application is Java/JEE based, using an Oracle DB and the Hibernate ORM tool. We explored various options, like:

    - Hibernate batch processing
    - Stored procedures
    - Database partitioning

    Our DBA suggests database partitioning is the best way to go, so we can easily recreate and drop the partitioned table every day. Now the issue is that we have 2 kinds of data: one which we want to delete every day, and the other which we want to keep. Suppose this data is stored in the table "Trade". With partitioning, we would have 2 "Trade" tables, but we already have an existing Hibernate-based DAO layer to fetch/store trades from/to the DB. When we decide to partition the database, how can we control through Hibernate which of the two tables a trade goes into? Basically I want the trades that need to be deleted by end of day to go into the partitioned table, and the trades I want to keep into the main table. Please suggest how this can be possible with Hibernate. We could add an additional column to identify the trades to be deleted, but how can we ensure these trades go to the partitioned trade table using Hibernate? I would appreciate it if someone could suggest a better approach in case we are on the wrong path.
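
    If the partitioning route is taken, one sketch (Oracle syntax; column, table and partition names hypothetical) keeps a single TRADE table list-partitioned on a purge flag. Hibernate then needs only one mapped entity with that extra column - Oracle routes each row to the right partition by the column's value - and the nightly cleanup becomes a metadata operation instead of 200K deletes:

        CREATE TABLE trade (
            trade_id    NUMBER(19)     NOT NULL PRIMARY KEY,
            trade_data  VARCHAR2(4000),
            purge_flag  CHAR(1)        NOT NULL  -- 'Y' = delete at end of day
        )
        PARTITION BY LIST (purge_flag) (
            PARTITION p_purge VALUES ('Y'),
            PARTITION p_keep  VALUES ('N')
        );

        -- Nightly job (UPDATE GLOBAL INDEXES keeps the primary key index usable):
        ALTER TABLE trade TRUNCATE PARTITION p_purge UPDATE GLOBAL INDEXES;

    The DAO layer stays as it is; saving a trade just sets purge_flag, with no second table or second entity mapping required.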

    Read the article

  • Performance of Multiple Joins

    - by geeko
    Greetings Overflowers! I need to query objects with many/complex spatial conditions. In relational databases that translates to many joins (possibly 10+). I'm new to this business and am wondering whether to go with MS SQL Server 2008 R2, or Oracle 11g, or a document-based solution such as RavenDB, or simply some spatial (GIS) database... Any thoughts? Regards

    UPDATE: Thank you all for your answers. Would anybody opt for document/spatial databases? My database would consist of tens of millions to a few billion records, mostly read-only. There are almost no updates, except in case of mistakes in the input; inserts happen overnight and are not that frequent. The join tables are predicted beforehand, but the number of self joins (tables joining themselves multiple times) is not. Small pages of results from such queries are going to be viewed on a highly interactive website, so response time is critical. Any predictions on how this can perform on MS SQL Server 2008 R2 or Oracle 11g? I'm also concerned about boosting performance by adding more servers; which one scales better? How about PostgreSQL?

    Read the article

  • SQL Server 2008 trigger not working correctly with multiple inserts

    - by Rob
    I've got the following trigger:

        CREATE TRIGGER trFLightAndDestination
        ON checkin_flight
        AFTER INSERT, UPDATE
        AS
        BEGIN
            IF NOT EXISTS
            (
                SELECT 1
                FROM Flight v
                INNER JOIN Inserted AS i ON i.flightnumber = v.flightnumber
                INNER JOIN checkin_destination AS ib ON ib.airport = v.airport
                INNER JOIN checkin_company AS im ON im.company = v.company
                WHERE i.desk = ib.desk
                AND i.desk = im.desk
            )
            BEGIN
                RAISERROR('This combination of of flight and check-in desk is not possible',16,1)
                ROLLBACK TRAN
            END
        END

    What I want the trigger to do is check the tables Flight, checkin_destination and checkin_company whenever a new record is added to checkin_flight. Every record of checkin_flight contains a flight number and the desk number where passengers check in for that flight. The tables checkin_destination and checkin_company contain information about which destinations and companies are restricted to which check-in desks. When adding a record to checkin_flight, I need information from the Flight table to get the destination and flight company for the inserted flight number. This information needs to be checked against the available check-in combinations for flights, destinations and companies. I'm using the trigger as stated above, but when I try to insert a wrong combination, the trigger allows it. What am I missing here?
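
    One weakness worth checking (a sketch against the schema above): the single NOT EXISTS tests the whole Inserted set at once, so in a multi-row insert one valid row lets all the invalid ones through. A per-row formulation fails if any inserted row lacks a valid combination:

        IF EXISTS
        (
            SELECT 1
            FROM Inserted AS i
            INNER JOIN Flight AS v ON v.flightnumber = i.flightnumber
            WHERE NOT EXISTS
            (
                SELECT 1
                FROM checkin_destination AS ib
                INNER JOIN checkin_company AS im ON im.desk = ib.desk
                WHERE ib.airport = v.airport
                  AND im.company = v.company
                  AND ib.desk = i.desk
            )
        )
        BEGIN
            RAISERROR('This combination of flight and check-in desk is not possible',16,1)
            ROLLBACK TRAN
        END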

    Read the article

  • import csv file/excel into sql database asp.net

    - by kiev
    Hi everyone! I am starting a project with ASP.NET in Visual Studio 2008 / SQL Server 2000 (2005 in the future), using C#. The tricky part for me is that the existing DB schema changes often, and the import file's columns will all have to be matched up with the existing DB schema, since they may not be a one-to-one match on column names. (There is a lookup table that provides the table's schema with the column names I will use.)

    I am exploring different ways to approach this, and need some expert advice. Are there any existing controls or frameworks that I can leverage to do any of this? So far I have explored the FileUpload .NET control, as well as some 3rd-party upload controls such as SlickUpload, to accomplish the upload; the files uploaded should be < 500 MB. The next part is reading my CSV/Excel file and parsing it for display to the user, so they can match it with our DB schema. I saw CSVReader and others, but Excel is more difficult, since I will need to support different versions. Essentially, the user performing this import will insert and/or update several tables from this import file. There are other, more advanced requirements, like record matching and a preview of the imported records, but I wish to understand how to do this first.

    Update: I ended up using CsvReader with LumenWorks.Framework for uploading the CSV files.
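
    For reference, a sketch of the LumenWorks CsvReader usage the update refers to (the file path and the mapping step are placeholders); the reader streams records, so large files are never loaded whole:

        using System.IO;
        using LumenWorks.Framework.IO.Csv;

        static void PreviewCsv(string path)
        {
            using (CsvReader csv = new CsvReader(new StreamReader(path), true)) // true = first row is headers
            {
                string[] headers = csv.GetFieldHeaders(); // present these for schema matching
                while (csv.ReadNextRecord())
                {
                    for (int i = 0; i < csv.FieldCount; i++)
                    {
                        string value = csv[i]; // map headers[i] -> destination column here
                    }
                }
            }
        }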

    Read the article

  • PHP driven site needs password change.

    - by Drea
    I have inherited a website that needs the password changed that it uses to access its database. I can see that there are two tables within the database, but neither of them has username or password info. The previous web guy moved out of the country and can't be reached, and I am not up to speed enough to figure this out. I have gone through all the files to try to find the answer but can't get it. It's hosted by goDaddy.com, and I have changed the passwords there, but that didn't change this login info.

    www.executivehomerents.com/cpanel <- this brings up the prompt for the username & password (which I won't give out), but the page only gives you 5 choices, and none of them deal with changing the password; they are simply for changing the data in the tables.

    If you go to http://www.websitedatabases.com/ <= this is the company that the PHPMagic program was purchased from - they have no contact number.

    Here is another page that might help: http://www.executivehomerents.com/cpanel/dbinput/setup.php

    I don't think I'll get an answer to this, but it's worth a try... thanks.

    Read the article

  • How would you organize this in asp.net mvc?

    - by chobo
    I have an ASP.NET MVC 2.0 application that contains Areas/Modules like calendar, admin, etc. There may be cases where more than one area needs to access the same repository, so I am not sure where to put the data access layers and repositories.

    First option: should I create the data access layer files (LINQ to SQL in my case) with their accompanying repositories for each area, so that each area contains only the tables and repositories needed by that area? The benefit is that everything needed to run a module is in one place, so it is more encapsulated (in my mind, anyway). The downside is that I may end up with duplicate queries, because other modules may use the same query.

    Second option: or would it be better to place the DAL and repositories outside the areas and treat them as global? The advantage is that I won't have any duplicate queries, but I may be loading a lot of unnecessary queries and DAL tables for certain modules. It is also more work to reuse or modify these modules for future projects (though the chance of reusing them is slim to none :)).

    Which option makes more sense? If someone has a better way, I'd love to hear it. Thanks!

    Read the article

  • In an Android application, should I have one content provider per table or only one for the entire application?

    - by Andrew Dyer
    I have years of experience with Microsoft .NET development (primarily C#) and have been working to come up to speed on Android and Java. So far I've built a small application with a couple of screens and a working content provider. All of the examples I've seen for developing content providers typically work with a single table, so I got the impression that this was the convention. I built a couple more content providers for other tables and ran into the "Unknown URI" IllegalArgumentException when I tried to test them. The exception is being thrown by one of my content providers, but not the one I was intending to call. It appears that my application is using the first content provider in the AndroidManifest.xml file, which now has me wondering if I should only have a single content provider for the entire application.

    Are there any best practices and/or examples for working with multiple tables in an Android application? Should I have one content provider per table, or only one for the entire application? If the former, how do I resolve URIs to the proper provider? If the latter, how do I keep my content provider code from being polluted with switch statements?
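
    One hedged guess at the "first provider wins" symptom, since the manifest isn't shown: each <provider> element must declare its own unique android:authorities value, and URIs are resolved by authority. If several providers were registered under the same authority, every URI would route to whichever one is declared first. A sketch (hypothetical names):

        <!-- Each provider gets a distinct authority; URIs then resolve correctly. -->
        <provider
            android:name=".CarsProvider"
            android:authorities="com.example.myapp.cars" />
        <provider
            android:name=".DriversProvider"
            android:authorities="com.example.myapp.drivers" />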

    Read the article

  • Problem with SQL Server "EXECUTE AS"

    - by Vilx-
    I've got the following setup: there is a SQL Server DB with several tables that have triggers set on them (which collect history data). These triggers are CLR stored procedures with EXECUTE AS 'HistoryUser'. The HistoryUser user is a simple user in the database, without a login. It has enough permissions to read from all tables and write to the history table.

    When I back up the DB and then restore it to another machine (a virtual machine in this case, but it does not matter), the triggers don't work anymore. In fact, no impersonation for the user works anymore. Even a simple statement such as this:

        exec ('select 3') as user='HistoryUser'

    produces an error:

        Cannot execute as the database principal because the principal "HistoryUser" does not exist,
        this type of principal cannot be impersonated, or you do not have permission.

    I read in MSDN that this can occur if the DB owner is a domain user, but it isn't. And even if I change it to anything else (their recommended solution), the problem remains. If I create another user without a login, I can use it for impersonation just fine. That is, this works:

        create user TestUser without login
        go
        exec ('select 3') as user='TestUser'

    I do not want to recreate all those triggers, so is there any way I can make the existing HistoryUser work?

    Bump: Sorry, but this is kinda urgent...
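
    A sketch of a repair often suggested for exactly this symptom (the assumption being that the restored database's owner SID no longer maps to a valid login on the new server, which breaks impersonation of users without login):

        -- Re-point the restored database at a login that exists on this server
        -- (database and login names here are placeholders):
        ALTER AUTHORIZATION ON DATABASE::MyRestoredDb TO [sa];

        -- Impersonation of users without login should then work again:
        exec ('select 3') as user='HistoryUser'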

    Read the article

  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I am using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it hits a huge lag that lasts a couple of seconds; then all goes back to normal until the next lag. The time period varies, but it looks like it happens periodically. I am using InnoDB. It is like hiccups in MySQL. What could be creating this sort of periodic problem? I do not have any cron jobs or processes running when the X period happens. The X period could be anywhere between 30 minutes and 2 hours: for example, it could happen every 30 minutes for the next 12 hours, or every 2 hours for the next 8 hours.

        key_buffer_size = 256M
        max_allowed_packet = 1M
        table_cache = 1024
        table_open_cache = 1024
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 4M
        myisam_sort_buffer_size = 32M
        thread_cache_size = 128
        query_cache_size= 128M
        log-slow-queries = slow.log
        long_query_time = 5
        log-queries-not-using-indexes

        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 4
        max_connections=512

        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /usr/local/mysql/data
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        innodb_buffer_pool_size = 1G
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 64M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 0
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer_size = 64M
        sort_buffer_size = 64M
        read_buffer = 2M
        write_buffer = 2M

    There are about 200+ tables divided over 3 databases. The most heavily written-to tables are InnoDB; the others are mostly read. Several of the InnoDB tables have more than 2 million records; the other databases top out at about 400 thousand records and do not change that often. The PC is a Core 2 Duo 8400 with 4 GB RAM, running 32-bit Ubuntu.
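
    Not a diagnosis, but a sketch of the settings most often examined for this hiccup pattern (all values are assumptions to test; note the InnoDB log settings above are commented out, so the 5.5 defaults apply):

        # Bigger redo logs spread checkpoint flushing out; in 5.5 changing this
        # requires a clean shutdown and removing the old ib_logfile* files first.
        innodb_log_file_size           = 256M
        innodb_log_buffer_size         = 8M

        # Removes an fsync per commit, if losing up to ~1s of commits on an OS
        # crash is acceptable.
        innodb_flush_log_at_trx_commit = 2

        # A large query cache is itself a classic source of periodic stalls on
        # write-heavy InnoDB workloads; try disabling it to compare.
        query_cache_size               = 0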

    Read the article

  • DB Design to store custom fields for a table

    - by Fazal
    Hi all, this question came up based on the responses I got to http://stackoverflow.com/questions/2785033/getting-wierd-issue-with-to-number-function-in-oracle. As everyone suggested, storing numeric values in VARCHAR2 columns is not a good practice (which I totally agree with), so I am wondering about a basic design choice our team has made and whether there are better ways to design it.

    Problem statement: we have many tables in which we want to offer a certain number of custom fields. The number of custom fields required is known, but which kind of attribute is mapped to each column is up to the user. Let me put down a hypothetical scenario: say you have a laptop entity which stores 50 attribute values for every laptop record. The laptop's attributes are defined by the admin who creates the laptop. One user created a laptop product, say lap1, with the attributes String, String, numeric, numeric, String. A second user created a laptop lap2 with the attributes String, numeric, String, String, numeric. Currently the data gets persisted as follows:

        Laptop table
        Id  Name  field1  field2  field3  field4  field5
        1   lap1  lappy   lappy   12      13      lappy
        2   lap2  lappy2  13      lappy2  lapp2   12

    This example roughly simulates our requirement and our design. Now, if somebody is looking up records for lap2 with a comparison on field2, we need to apply TO_NUMBER when querying:

        select * from laptop
        where name = 'lap2'
        and TO_NUMBER(field2) < 15

    TO_NUMBER fails in some cases, when the query plan decides to apply TO_NUMBER before the other filter.

    QUESTIONS:

    1. Is this a valid design?
    2. What are the alternative ways to solve this problem? One of our team mates suggested creating tables on the fly for such cases. Is that a good idea?
    3. How do popular ORM tools handle custom fields or flex fields?

    I hope I was able to make sense in the question. Sorry for such a long text.
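
    On questions 1 and 2: the typed-column variant of this design (one nullable column per type, as in the "value" table idea from the question above) is popular precisely because it takes TO_NUMBER out of the picture. A sketch of the query shape under that design (table and field names hypothetical):

        -- The field 'age' was declared numeric, so only value_int is populated for
        -- its rows; no conversion function is needed, and an index on
        -- (field_id, value_int) stays usable.
        SELECT v.user_id
        FROM   field_value v
        JOIN   field f ON f.id = v.field_id
        WHERE  f.name = 'age'
        AND    v.value_int < 15;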

    Read the article

  • How can I manage a FIFO queue in a database with SQL?

    - by Jonas
    I have two tables in my database, one for In and one for Out. They have two columns, Quantity and Price. How can I write an SQL query that selects the correct price?

    For example: if I have 3 items in for 75 and then 3 items in for 80, and then two out for 75, then the third out should be for 75 (X) and the fourth out should be for 80 (Y). How can I write the price query for X and Y? They should use the prices from the third and fourth In rows. For example, is there any way to SELECT the third row in the In table? I cannot use auto_increment as the identifier for, e.g., the "third" row, because the tables also contain posts for other items. The rows will not be deleted; they are kept for accountability reasons.

        SELECT Price FROM In WHERE ...?

    NEW database design:

        +----+
        | In |
        +----+------+-------+
        | Supply_ID | Price |
        +-----------+-------+
        | 1         | 75    |
        | 1         | 75    |
        | 1         | 75    |
        | 2         | 80    |
        | 2         | 80    |
        +-----------+-------+

        +-----+
        | Out |
        +-----+-------+-------+
        | Delivery_ID | Price |
        +-------------+-------+
        | 1           | 75    |
        | 1           | 75    |
        | 2           | X     | <- ?
        | 3           | Y     | <- ?
        +-------------+-------+

    OLD database design:

        +----+
        | In |
        +----+------+----------+-------+
        | Supply_ID | Quantity | Price |
        +-----------+----------+-------+
        | 1         | 3        | 75    |
        | 2         | 3        | 80    |
        +-----------+----------+-------+

        +-----+
        | Out |
        +-----+-------+----------+-------+
        | Delivery_ID | Quantity | Price |
        +-------------+----------+-------+
        | 1           | 2        | 75    |
        | 2           | 1        | X     | <- ?
        | 3           | 1        | Y     | <- ?
        +-------------+----------+-------+
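
    With the new one-row-per-unit design, a sketch of the "price of the Nth unit in" lookup (assuming, hypothetically, an auto-increment id column for arrival order plus an item column to separate products, since the tables hold other items too):

        SELECT Price
        FROM   `In`
        WHERE  item_id = ?           -- hypothetical item discriminator
        ORDER  BY id                 -- arrival (FIFO) order
        LIMIT  1 OFFSET 2;           -- 0-based: the 3rd In row, i.e. X = 75

    In general, the offset is simply the number of units already delivered for that item, so the fourth delivery (Y) would use OFFSET 3 and return 80.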

    Read the article

  • C#: Using one Data Table in order to fill 2 different comboboxes?

    - by odiseh
    Hi all. I have 2 comboboxes on a form (form1), called combobox1 and combobox2. Each combobox should be filled with data stored in one of 2 different tables in SQL Server 2005, table1 and table2. I mean:

        combobox1 -> table1
        combobox2 -> table2

    I fill a data table with the proper data and then bind the comboboxes to it separately. My problem is: after filling the 2 combos, both of them end up with the same data, taken from table2. This is my code:

        DataTable tb1 = new DataTable();

        //Filling tb1 with data got from table1
        combobox1.Items.Clear();
        combobox1.DataSource = tb1;
        combobox1.DisplayMember = "Name";
        combobox1.ValueMember = "ID";
        combobox1.SelectedIndex = -1;

        //filling tb1 with data got from table2
        combobox2.Items.Clear();
        combobox2.DataSource = tb1;
        combobox2.DisplayMember = "Name";
        combobox2.ValueMember = "ID";
        combobox2.SelectedIndex = -1;

    What's wrong? It seems that if I used 2 different data tables (tb1 and tb2), everything would be all right. Any suggestions please? Thank you.
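
    That is very likely the cause: both combos are bound to the same DataTable instance, so refilling tb1 with table2's data changes what combobox1 displays too. A sketch with one table per combo (plus a BindingSource each, to keep the bindings independent):

        // Each combo gets its own DataTable and its own BindingSource.
        DataTable tb1 = new DataTable();
        DataTable tb2 = new DataTable();
        // ...fill tb1 from table1 and tb2 from table2 here...

        combobox1.DataSource = new BindingSource(tb1, null);
        combobox1.DisplayMember = "Name";
        combobox1.ValueMember = "ID";
        combobox1.SelectedIndex = -1;

        combobox2.DataSource = new BindingSource(tb2, null);
        combobox2.DisplayMember = "Name";
        combobox2.ValueMember = "ID";
        combobox2.SelectedIndex = -1;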

    Read the article

  • Select rows from table1 and all the children from table2 into an object

    - by Patrick
    I want to pull data from the table "Province_Notifiers" and also fetch all corresponding items from the table "Province_Notifier_Datas". The table "Province_Notifier" has a GUID to identify it (PK); the table "Province_Notifier_Datas" has a column called BelongsToProvinceID, which is a foreign key to the "Province_Notifier" table's GUID. I tried something like this:

        var records = from data in ctx.Province_Notifiers
                      where DateTime.Now >= data.SendTime && data.Sent == false
                      join data2 in ctx.Province_Notifier_Datas
                          on data.Province_ID equals data2.BelongsToProvince_ID
                      select new Province_Notifier
                      {
                          Email = data.Email,
                          Province_ID = data.Province_ID,
                          ProvinceName = data.ProvinceName,
                          Sent = data.Sent,
                          UserName = data.UserName,
                          User_ID = data.User_ID,
                          Province_Notifier_Datas = (new List<Province_Notifier_Data>().AddRange(data2))
                      };

    This line is not working, and I am trying to figure out how to pull the data from table2 into that Province_Notifier_Datas variable:

        Province_Notifier_Datas = (new List<Province_Notifier_Data>().AddRange(data2))

    I can add a record easily by adding the second table's row into Province_Notifier_Datas, but I can't fetch it back:

        Province_Notifier dbNotifier = new Province_Notifier();
        // set some values for dbNotifier
        dbNotifier.Province_Notifier_Datas.Add(
            new Province_Notifier_Data
            {
                BelongsToProvince_ID = userInput.Value.ProvinceId,
                EventText = GenerateNotificationDetail(notifierDetail)
            });

    This works and inserts the data correctly into both tables.

    EDIT: These error messages are thrown:

        Cannot convert from 'Province_Notifier_Data' to 'System.Collections.Generic.IEnumerable'

    If I look in Visual Studio, the variable "Province_Notifier_Datas" is of type System.Data.Linq.EntitySet.

        The best overloaded method match for 'System.Collections.Generic.List.AddRange(System.Collections.Generic.IEnumerable)' has some invalid arguments

    EDIT:

        var records = from data in ctx.Province_Notifiers
                      where DateTime.Now >= data.SendTime && data.Sent == false
                      join data2 in ctx.Province_Notifier_Datas
                          on data.Province_ID equals data2.BelongsToProvince_ID
                          into data2list
                      select new Province_Notifier
                      {
                          Email = data.Email,
                          Province_ID = data.Province_ID,
                          ProvinceName = data.ProvinceName,
                          Sent = data.Sent,
                          UserName = data.UserName,
                          User_ID = data.User_ID,
                          Province_Notifier_Datas = new EntitySet<Province_Notifier_Data>().AddRange(data2List)
                      };

        Error 3: The name 'data2List' does not exist in the current context.
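
    Two separate problems seem to be in play here: the group-join variable is declared as data2list but referenced as data2List (C# identifiers are case-sensitive), and EntitySet.AddRange returns void, so its result cannot be assigned to the property. A sketch that sidesteps both by materializing first (LINQ to SQL also refuses to construct mapped entity types inside a query, and this assumes the designer-generated entity initializes its own EntitySet):

        var rows = (from data in ctx.Province_Notifiers
                    where DateTime.Now >= data.SendTime && data.Sent == false
                    join data2 in ctx.Province_Notifier_Datas
                        on data.Province_ID equals data2.BelongsToProvince_ID
                        into data2list
                    select new { data, data2list }).AsEnumerable();

        var records = rows.Select(r =>
        {
            var n = new Province_Notifier
            {
                Email = r.data.Email,
                Province_ID = r.data.Province_ID,
                ProvinceName = r.data.ProvinceName,
                Sent = r.data.Sent,
                UserName = r.data.UserName,
                User_ID = r.data.User_ID,
            };
            n.Province_Notifier_Datas.AddRange(r.data2list); // fill the EntitySet after construction
            return n;
        }).ToList();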

    Read the article
