Search Results

Search found 32641 results on 1306 pages for 'sql constraint and keys'.


  • MySQL: Creating table with FK error (errno 150)

    - by Peter Bailey
    I've tried searching on this error and nothing I've found helps me, so I apologize in advance if this is a duplicate and I'm just too dumb to find it. I've created a model with MySQL Workbench and am now attempting to install it to a MySQL server. Using File > Export > Forward Engineer SQL CREATE Script... it outputs a nice big file for me, with all the settings I ask for. I switch over to MySQL GUI Tools (the Query Browser specifically) and load up this script (note that I'm going from one official MySQL tool to another). However, when I try to actually execute this file, I get the same error over and over:

        SQLSTATE[HY000]: General error: 1005 Can't create table './srs_dev/location.frm' (errno: 150)

    "OK", I say to myself, something is wrong with the location table. So I check out the definition in the output file.

        SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
        SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
        SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL';

        -- -----------------------------------------------------
        -- Table `state`
        -- -----------------------------------------------------
        DROP TABLE IF EXISTS `state` ;
        CREATE TABLE IF NOT EXISTS `state` (
          `state_id` INT NOT NULL AUTO_INCREMENT ,
          `iso_3166_2_code` VARCHAR(2) NOT NULL ,
          `name` VARCHAR(60) NOT NULL ,
          PRIMARY KEY (`state_id`) )
        ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `brand`
        -- -----------------------------------------------------
        DROP TABLE IF EXISTS `brand` ;
        CREATE TABLE IF NOT EXISTS `brand` (
          `brand_id` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
          `name` VARCHAR(45) NOT NULL ,
          `domain` VARCHAR(45) NOT NULL ,
          `manager_name` VARCHAR(100) NULL ,
          `manager_email` VARCHAR(255) NULL ,
          PRIMARY KEY (`brand_id`) )
        ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `location`
        -- -----------------------------------------------------
        DROP TABLE IF EXISTS `location` ;
        CREATE TABLE IF NOT EXISTS `location` (
          `location_id` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
          `name` VARCHAR(255) NOT NULL ,
          `address_line_1` VARCHAR(255) NULL ,
          `address_line_2` VARCHAR(255) NULL ,
          `city` VARCHAR(100) NULL ,
          `state_id` TINYINT UNSIGNED NULL DEFAULT NULL ,
          `postal_code` VARCHAR(10) NULL ,
          `phone_number` VARCHAR(20) NULL ,
          `fax_number` VARCHAR(20) NULL ,
          `lat` DECIMAL(9,6) NOT NULL ,
          `lng` DECIMAL(9,6) NOT NULL ,
          `contact_url` VARCHAR(255) NULL ,
          `brand_id` TINYINT UNSIGNED NOT NULL ,
          `summer_hours` VARCHAR(255) NULL ,
          `winter_hours` VARCHAR(255) NULL ,
          `after_hours_emergency` VARCHAR(255) NULL ,
          `image_file_name` VARCHAR(100) NULL ,
          `manager_name` VARCHAR(100) NULL ,
          `manager_email` VARCHAR(255) NULL ,
          `created_date` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ,
          PRIMARY KEY (`location_id`) ,
          CONSTRAINT `fk_location_state`
            FOREIGN KEY (`state_id` )
            REFERENCES `state` (`state_id` )
            ON DELETE NO ACTION
            ON UPDATE NO ACTION,
          CONSTRAINT `fk_location_brand`
            FOREIGN KEY (`brand_id` )
            REFERENCES `brand` (`brand_id` )
            ON DELETE NO ACTION
            ON UPDATE NO ACTION)
        ENGINE = InnoDB;

        CREATE INDEX `fk_location_state` ON `location` (`state_id` ASC) ;
        CREATE INDEX `fk_location_brand` ON `location` (`brand_id` ASC) ;
        CREATE INDEX `idx_lat` ON `location` (`lat` ASC) ;
        CREATE INDEX `idx_lng` ON `location` (`lng` ASC) ;

    Looks OK to me. I surmise that maybe something is wrong with the Query Browser, so I put this file on the server and try to load it this way:

        mysql -u admin -p -D dbname < path/to/create_file.sql

    And I get the same error.
    So I start to Google this issue and find all kinds of accounts that talk about an error with InnoDB-style tables that fail with foreign keys, where the fix is to add "SET FOREIGN_KEY_CHECKS=0;" to the SQL script. Well, as you can see, that's already part of the file that MySQL Workbench spat out. So, my question is then: why is this not working when I'm doing what I think I'm supposed to be doing? Version info: MySQL 5.0.45, GUI Tools 1.2.17, Workbench 5.0.30.
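    A note on the error itself: InnoDB's errno 150 almost always means that a foreign key column's type does not exactly match the referenced column's type (size and signedness included), and that mismatch is present here: `location`.`state_id` is TINYINT UNSIGNED while `state`.`state_id` is INT, and `location`.`brand_id` is TINYINT UNSIGNED while `brand`.`brand_id` is INT UNSIGNED. A minimal sketch of the kind of correction that usually clears the error, with the unchanged columns omitted for brevity:

        CREATE TABLE IF NOT EXISTS `location` (
          `location_id` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
          `state_id` INT NULL DEFAULT NULL ,        -- was TINYINT UNSIGNED; must match state.state_id (INT)
          `brand_id` INT UNSIGNED NOT NULL ,        -- was TINYINT UNSIGNED; must match brand.brand_id (INT UNSIGNED)
          -- ... remaining columns exactly as in the generated script ...
          PRIMARY KEY (`location_id`) ,
          CONSTRAINT `fk_location_state` FOREIGN KEY (`state_id`) REFERENCES `state` (`state_id`) ,
          CONSTRAINT `fk_location_brand` FOREIGN KEY (`brand_id`) REFERENCES `brand` (`brand_id`) )
        ENGINE = InnoDB;

    The cleaner fix is to change the column types in the Workbench model itself and re-run Forward Engineer, so the generated script stays the single source of truth.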


  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
    There’s nothing quite as satisfying as the smooth and crisp action of a well-built keyboard. If you’re tired of mushy keys and cheap-feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through its paces.

    What is the CODE Keyboard?

    The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and Discourse forum software). Atwood’s focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words:

        The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I’ve owned and used at least six different expensive mechanical keyboards, but I wasn’t satisfied with any of them, either: they didn’t have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That’s why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard.

    Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can’t argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you’re a typist of a certain vintage there’s a good chance you’ve never actually typed on a really nice keyboard. Those who didn’t start using computers until the mid-to-late 1990s most likely have always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We’re not even going to try and hide it. So where does the CODE keyboard stack up in the pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE.

    Setting Up the CODE Keyboard

    Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven’t had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you’ll find the keyboard, a micro USB cable, a USB-to-PS2 adapter, and a tool which you may be unfamiliar with: a key puller. We’ll return to the key puller in a moment.

    Unlike the majority of keyboards on the market, the cord isn’t permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels on the underside of the keyboard to route your cable exactly where you want it. While we’re staring at the underside of the keyboard, check out those beefy rubber feet. By peripheral standards they’re huge (and there are six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down; the rubber feet work that well.
    After you’ve secured the cable and adjusted it to your liking, there is one more task before plugging the keyboard into the computer. On the bottom left-hand side of the keyboard, you’ll find a small recess in the plastic with some dip switches inside. The dip switches are there to switch hardware functions for various operating systems, keyboard layouts, and to enable/disable function keys. By toggling the dip switches you can change the keyboard from QWERTY mode to Dvorak mode and Colemak mode, the two most popular alternative keyboard configurations. You can also use the switches to enable Mac functionality (for Command/Option keys). One of our favorite little toggles is the SW3 dip switch: you can disable the Caps Lock key; goodbye accidentally pressing Caps when you mean to press Shift. You can review the entire dip switch configuration chart here.

    The quick-start for Windows users is simple: double-check that all the switches are in the off position (as seen in the photo above) and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key as typically found on laptop keyboards). After adjusting the dip switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter).

    Design, Layout, and Backlighting

    The CODE keyboard comes in two flavors: a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right-hand side). We identify the layout as traditional because, despite some modern trappings and sneaky shortcuts, the actual form factor of the keyboard, from the shape of the keys to the spacing and position, is as classic as it comes. You won’t have to learn a new keyboard layout and spend weeks conditioning yourself to a smaller-than-normal backspace key or a PgUp/PgDn pair in an unconventional location.

    Just because the keyboard is very conventional in layout, however, doesn’t mean you’ll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11, F12, Pause button, and the 2×6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and then the typical play/pause, forward/backward, stop, and volume +/- in the Insert and Delete rows, respectively. While we weren’t sure what we’d think of the function-key system at first (especially after retiring a Microsoft Sidewinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly.

    Keyboard backlighting is a largely hit-or-miss undertaking but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting, but the key switches the keys are attached to are mounted to a white-painted steel plate. Enough of the light reflects off the interior cavity of the keys and then diffuses across the white plate to provide nice even illumination in between the keys.

    Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard. It’s rock solid. The 87-key model, the one we tested, is 2.0 pounds. The 104-key is nearly a half pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB beneath the steel plate, and the thick ABS plastic housing, the keyboard has a very solid feel to it.
    Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won’t budge a millimeter during normal use.

    Examining The Keys

    This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We’ve looked at the layout of the keyboard, we’ve looked at the general construction of it, but what about the actual keys?

    There are a wide variety of keyboard construction techniques but the vast majority of modern keyboards use a rubber-dome construction. The key is floated in a plastic frame over a rubber membrane that has a little rubber dome for each key. The press of the physical key compresses the rubber dome downwards and a little bit of conductive material on the inside of the dome’s apex connects with the circuit board. Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as “bottoming out”. In other words, to register the “b” key, you need to completely press that key down. As such it slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue.

    The CODE keyboard features key switches manufactured by Cherry, a company that has manufactured key switches since the 1960s. Specifically, the CODE features Cherry MX Clear switches. These switches feature the same classic design as the other Cherry switches (such as the MX Blue and Brown switch lineups) but they are significantly quieter (yes, this is a mechanical keyboard, but no, your neighbors won’t think you’re firing off a machine gun) as they lack the audible click found in most Cherry switches. This isn’t to say that the keyboard doesn’t have a nice audible key press sound when the key is fully depressed, but the key mechanism doesn’t create a loud click sound when triggered. One of the great features of the Cherry MX Clear is a tactile “bump” that indicates the key has been compressed enough to register the stroke. For touch typists the very subtle tactile feedback is a great indicator that you can move on to the next stroke and provides a welcome speed boost. Even if you’re not trying to break any words-per-minute records, that little bump when pressing the key is satisfying.

    The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style key switches. Rubber-dome membrane keyboards are typically rated for 5-10 million contacts whereas the Cherry mechanical switches are rated for 50 million contacts. You’d have to write the next War and Peace and follow that up with A Tale of Two Cities: Zombie Edition, and then turn around and transcribe them both into a dozen different languages to even begin putting a tiny dent in the lifecycle of this keyboard.

    So what do the switches look like under the classically styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove. Wiggle the key puller gently back and forth while exerting a gentle upward pressure to pop the key off. You can repeat the process for every key if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard.
    There it is, the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke.

    The last feature worthy of a mention is the N-key rollover functionality of the keyboard. This is a feature you simply won’t find on non-mechanical keyboards, and even gaming keyboards typically only have any sort of key rollover on the high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys as the third one doesn’t register. PS/2 keyboards allow for unlimited rollover (in other words you can’t out-type the keyboard as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter you gain this ability. If you don’t use the PS/2 adapter and use the native USB, you still get 6-key rollover (and the CTRL, ALT, and SHIFT don’t count towards the 6), so realistically you still won’t be able to out-type the computer, as even the more finger-twisting keyboard combos and high-speed typing will still fall well within the 6-key rollover. The rollover absolutely doesn’t matter if you’re a slow hunt-and-peck typist, but if you’ve read this far into a keyboard review there’s a good chance that you’re a serious typist and that kind of quality construction and high-number key rollover is a fantastic feature.

    The Good, The Bad, and the Verdict

    We’ve put the CODE keyboard through its paces, we’ve played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard.

    The Good:
    - The construction is rock solid. In an emergency, we’re confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard).
    - The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offer a really nice middle ground between the gun-shot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard without sacrificing quality. Touch typists will love the subtle tactile bump feedback.
    - The dip switch system makes it very easy for users on different systems and with different keyboard layout needs to switch between operating system and keyboard layouts. If you’re investing a chunk of change in a keyboard it’s nice to know you can take it with you to a different operating system or “upgrade” it to a new layout if you decide to take up Dvorak-style typing.
    - The backlighting is perfect. You can adjust it from a barely-visible glow to a blazing light-up-the-room brightness. Whatever your intensity preference, the white-coated steel backplate does a great job diffusing the light between the keys.
    - You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout).
    - The weight of the unit combined with the extra-thick rubber feet keeps it planted exactly where you place it on the desk.

    The Bad:
    - While you’re getting your money’s worth, the $150 price tag is a shock when compared to the $20-60 price tags you find on lower-end keyboards.
    - People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be put off by the Fn-key style media controls on the CODE.

    The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists.
    Whether you’re a programmer, transcriptionist, or just somebody who wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock-solid typing experience. Yes, $150 isn’t pocket change, but the quality of the CODE keyboard is so high and the typing experience is so enjoyable, you’re easily getting ten times the value you’d get out of purchasing a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you’re still getting more for your money as other mechanical keyboards don’t come with the lovely-to-type-on Cherry MX Clear switches, backlighting, and hardware-based operating system keyboard layout switching. If it’s in your budget to upgrade your keyboard (especially if you’ve been slogging along with a low-end rubber-dome keyboard) there’s no good reason not to pick up a CODE keyboard.

    Key animation courtesy of Geekhack.org user Lethal Squirrel.


  • Regular Expression Transformation

    The Regular Expression Transformation exposes the power of regular expression matching within the pipeline. One or more columns can be selected, and for each column an individual expression can be applied. The way multiple columns are handled can be set on the options page: the AND option means all columns must match, whilst the OR option means only one column has to match. If rows pass their tests they are passed down the successful match output; rows that fail are directed down the alternate output.

    This transformation is ideal for validating data through the use of regular expressions. You can enter any expression you like, or select a pre-configured expression within the editor. You can expand the list of pre-configured expressions yourself; these are stored in an XML file, %ProgramFiles%\Microsoft SQL Server\nnn\DTS\PipelineComponents\RegExTransform.xml, where nnn represents the folder version: 90 for 2005, 100 for 2008 and 110 for 2012. If you want to use regular expressions to manipulate data, rather than just validating it, try the RegexClean Transformation.

    The component is provided as an MSI file; however, for 2005/2008 you will have to add the transformation to the Visual Studio toolbox by hand. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? - just select Regular Expression Transformation in the Choose Toolbox Items window.

    Downloads

    The Regular Expression Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

    - Regular Expression Transformation for SQL Server 2005
    - Regular Expression Transformation for SQL Server 2008
    - Regular Expression Transformation for SQL Server 2012

    Version History

    SQL Server 2012
    - Version 2.0.0.87 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)

    SQL Server 2008
    - Version 2.0.0.87 - Release for SQL Server 2008 Integration Services. (10 Oct 2008)

    SQL Server 2005
    - Version 1.1.0.93 - Added option for you to choose AND or OR logic when multiple columns have been selected. Previously behaviour was OR only. (31 Jul 2008)
    - Version 1.0.0.76 - Installer update and improved exception handling. (28 Jan 2008)
    - Version 1.0.0.41 - Update for user interface stability fixes. (2 Aug 2006)
    - Version 1.0.0.24 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006)
    - Version 1.0.0.9 - Public Release for SQL Server 2005 IDW 15 June CTP (29 Aug 2005)


  • DataGrid filter in C# using SQL Server

    - by malou17
    How to filter data in datagrid for example if u select the combo box in student number then input 1001 in the text field...all records in 1001 will appear in datagrid.....we are using sql server private void button2_Click(object sender, EventArgs e) { if (cbofilter.SelectedIndex == 0) { string sql; SqlConnection conn = new SqlConnection(); conn.ConnectionString = "Server= " + Environment.MachineName.ToString() + @"\; Initial Catalog=TEST;Integrated Security = true"; SqlDataAdapter da = new SqlDataAdapter(); DataSet ds1 = new DataSet(); ds1 = DBConn.getStudentDetails("sp_RetrieveSTUDNO"); sql = "Select * from Test where STUDNO like '" + txtvalue.Text + "'"; SqlCommand cmd = new SqlCommand(sql, conn); cmd.CommandType = CommandType.Text; da.SelectCommand = cmd; da.Fill(ds1); dbgStudentDetails.DataSource = ds1; dbgStudentDetails.DataMember = ds1.Tables[0].TableName; dbgStudentDetails.Refresh(); } else if (cbofilter.SelectedIndex == 1) { //string sql; //SqlConnection conn = new SqlConnection(); //conn.ConnectionString = "Server= " + Environment.MachineName.ToString() + @"\; Initial Catalog=TEST;Integrated Security = true"; //SqlDataAdapter da = new SqlDataAdapter(); //DataSet ds1 = new DataSet(); //ds1 = DBConn.getStudentDetails("sp_RetrieveSTUDNO"); //sql = "Select * from Test where Name like '" + txtvalue.Text + "'"; //SqlCommand cmd = new SqlCommand(sql,conn); //cmd.CommandType = CommandType.Text; //da.SelectCommand = cmd; //da.Fill(ds1); // dbgStudentDetails.DataSource = ds1; //dbgStudentDetails.DataMember = ds1.Tables[0].TableName; //ds.Tables[0].DefaultView.RowFilter = "Studno = + txtvalue.text + "; dbgStudentDetails.DataSource = ds.Tables[0]; dbgStudentDetails.Refresh(); } }


  • Validate MVC 2 form using Data annotations and Linq-to-SQL, before the model binder kicks in (with D

    - by Stefanvds
    I'm using linq to SQL and MVC2 with data annotations and I'm having some problems on validation of some types. For example: [DisplayName("Geplande sessies")] [PositiefGeheelGetal(ErrorMessage = "Ongeldige ingave. Positief geheel getal verwacht")] public string Proj_GeplandeSessies { get; set; } This is an integer, and I'm validating to get a positive number from the form. public class PositiefGeheelGetalAttribute : RegularExpressionAttribute { public PositiefGeheelGetalAttribute() : base(@"\d{1,7}") { } } Now the problem is that when I write text in the input, I don't get to see THIS error, but I get the errormessage from the modelbinder saying "The value 'Tomorrow' is not valid for Geplande sessies." The code in the controller: [HttpPost] public ActionResult Create(Projecten p) { if (ModelState.IsValid) { _db.Projectens.InsertOnSubmit(p); _db.SubmitChanges(); return RedirectToAction("Index"); } else { SelectList s = new SelectList(_db.Verbonds, "Verb_ID", "Verb_Naam"); ViewData["Verbonden"] = s; } return View(); } What I want is being able to run the Data Annotations before the Model binder, but that sounds pretty much impossible. What I really want is that my self-written error messages show up on the screen. I have the same problem with a DateTime, which i want the users to write in the specific form 'dd/MM/yyyy' and i have a regex for that. but again, by the time the data-annotations do their job, all i get is a DateTime Object, and not the original string. So if the input is not a date, the regex does not even run, cos the data annotations just get a null, cos the model binder couldn't make it to a DateTime. Does anyone have an idea how to make this work?


  • LINQ-to-SQL: How can I prevent 'objects you are adding to the designer use a different data connecti

    - by Timothy Khouri
    I am using Visual Studio 2010, and I have a LINQ-to-SQL DBML file that my colleagues and I are using for this project. We have a connection string in the web.config file that the DBML is using. However, when I drag a new table from my "Server Explorer" onto the DBML file... I get presented with a dialog that demands that I do one of these two options:

    - Allow Visual Studio to change the connection string to match the one in my Solution Explorer.
    - Cancel the operation (meaning, I don't get my table).

    I don't really care too much about the debate as to why the PMs/devs who made this tool didn't allow a third option - "Create the object anyway - don't worry, I'm a developer!" What I am thinking would be a good solution is if I can create a connection in the Server Explorer - WITHOUT A WIZARD. If I can just paste a connection string, that would be awesome! Because then the DBML designer won't freak out on me :O) If anyone knows the answer to this question, or how to do the above, please lemme know!


  • VB.net Debug sqldatareader - immediate window

    - by ScaryJones
    Scenario is this; I've a sqldatareader query that seems to be returning nothing when I try to convert the sqldatareader to a datatable using datatable.load. So I debug into it, I grab the verbose SQL query before it goes into the sqldatareader, just to make sure it's formatted correctly. I copy and paste this into SQL server to run it and see if it returns anything. It does, one row. I go back to visual studio and let the program continue, I create a datatable and try to load the sqldatareader but it just returns an empty reader. I'm baffled as to what's going on. I'll copy a version of the code (not the exact SQL query I'm using but close) here: Dim cn As New SqlConnection cn.ConnectionString = <connection string details here> cn.Open() Dim sqlQuery As String = "select * from Products where productid = 5" Dim cm As New SqlCommand(sqlQuery, cn) Dim dr As SqlDataReader = cm.ExecuteReader() Dim dt as new DataTable dt.load(dr) dt should have contents but it's empty. If I copy that SQL query into sql server and run it I get a row of results. Any ideas what I'm doing wrong? ######### UPDATE ############ I've now noticed that it seems to be returning one less row than I get with each SQL query. So, if I run the SQL myself and get 1 row then the datatable seems to have 0 rows. If the query returns 4 rows, the datatable has 3!! Very strange, any ideas anyone?


  • Cannot connect Linux XAMPP PHP to SQL Server database.

    - by Jim
    I've searched many sites without success. I'm using XAMPP 1.7.3a on Ubuntu 9.1. I have used the methods found at http://www.webcheatsheet.com/PHP/connect_mssql_database.php, they all fail. I am able to "connect" with a linked database through MS Access, however, that is not an acceptable solution as not all users will have Access. The first method (at webcheatsheet) uses mssql_connect, et.al. but I get this error from the mssql_connect() call: Warning: mssql_connect() [function.mssql-connect]: Unable to connect to server: [my server] in [my code] [my server] is the server address, I have used both the host name and the IP address. [my code] is a reference to the file and line number in my .php file. Is there a log file somewhere that would have more information about the failure, both on my machine and SQL Server? We do not have a bona-fide DBA, so I will need specific information to pass on if the issue seems to be on the server side. All assistance is appreciated, including RTFM when the location of the M is provided! Thanks


  • SQL: How to select rows from a table while ignoring the duplicate field values?

    - by Maxxon
    How to select rows from a table while ignoring the duplicate field values? Here is an example:

        id  user_id  message
        1   Adam     "Adam is here."
        2   Peter    "Hi there this is Peter."
        3   Peter    "I am getting sick."
        4   Josh     "Oh, snap. I'm on a boat!"
        5   Tom      "This show is great."
        6   Laura    "Textmate rocks."

    What I want to achieve is to select the recently active users from my db. Let's say I want to select the 5 recently active users. The problem is that the following script selects Peter twice.

        mysql_query("SELECT * FROM messages ORDER BY id DESC LIMIT 5 ");

    What I want is to skip the row when it gets to Peter again, and select the next result, in our case Adam. So I don't want to show my visitors that the recently active users were Laura, Tom, Josh, Peter, and Peter again. That does not make any sense; instead I want to show them this way: Laura, Tom, Josh, Peter, (skipping Peter) and Adam. Is there an SQL command I can use for this problem?
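    A minimal sketch of one standard way to do this in MySQL, assuming the table and columns are named as in the example: collapse the messages to one row per user (that user's most recent message id), then sort by that id and take the top five.

        SELECT user_id, MAX(id) AS last_message_id
        FROM messages
        GROUP BY user_id
        ORDER BY last_message_id DESC
        LIMIT 5;

    If the message text is needed as well, these ids can be joined back to the messages table.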


  • Sybase: how can I remove non-printable characters from CHAR or VARCHAR fields with SQL?

    - by Kenny Drobnack
    I'm working with a Sybase database that seems to have non-printable characters in some of the string fields and this is throwing off some of our processing code. At first glance, it seemed to only be newlines and carriage returns, but we also have an ASCII code 27 in there - an ESC character, some accented characters, and some other oddities in there. I have no direct access to change the database, so changing the bad data isn't an option, yet. For now I have to make do with just filtering it out. We're trying to export the table data from one database and load it into a database used by another application in a nightly batch process. Ideally, I'd like to have a function that I can pass a list of characters and just have Sybase return the data with those characters removed. I'd like to keep it something we could do in plain SQL if possible. Something like this to remove characters that are ASCII 0 - 31. select str_replace(FIELD1, (0-31), NULL) as FIELD1, str_replace(FIELD2, (0-31), NULL) as FIELD2 from TABLE So far, str_replace is the nearest I can find, but it only allows replacing one string with another. No support for character ranges and won't let me do the above. We're running on Sybase ASE 12.5 on Unix servers.
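    Sybase ASE has no range-based replace, so one plain-SQL workaround is to nest str_replace() once per unwanted character, passing NULL as the replacement to delete it (the same NULL trick used in the snippet above). A sketch covering a few of the characters mentioned (CR, LF, ESC); the same pattern extends to the rest, at the cost of one nesting level each:

        select str_replace(
                   str_replace(
                       str_replace(FIELD1, char(13), NULL),
                       char(10), NULL),
                   char(27), NULL) as FIELD1
        from TABLE

    Beyond a handful of characters this gets unwieldy, at which point doing the cleanup in the nightly export/load script is usually the more maintainable route.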


  • SQL Server Full Text Search with CONTAINSTABLE is very slow when used in a JOIN!

    - by Bob
    Hello, I am using SQL Server 2008 full-text search and I am having serious issues with performance depending on how I use CONTAINS or CONTAINSTABLE. Here are samples (table1 has about 5000 records and there is a covered index on table1 which has all the fields in the where clause; I tried to simplify the statements, so forgive me if there are syntax issues).

    Scenario 1:

        select * from table1 as t1
        where t1.field1=90
          and t1.field2='something'
          and Exists(select top 1 * from containstable(table1,*, 'something') as t2
                     where t2.[key]=t1.id)

    Results: 10 seconds (very slow)

    Scenario 2:

        select * from table1 as t1
        join containstable(table1,*, 'something') as t2 on t2.[key] = t1.id
        where t1.field1=90
          and t1.field2='something'

    Results: 10 seconds (very slow)

    Scenario 3:

        Declare @tbl Table(id uniqueidentifier primary key)

        insert into @tbl
        select [key] from containstable(table1,*, 'something')

        select * from table1 as t1
        where t1.field1=90
          and t1.field2='something'
          and Exists(select id from @tbl as tbl where id=req1.id)

    Results: fraction of a second (super fast)

    Bottom line, it seems that if I use CONTAINSTABLE in any kind of join or where clause condition of a select statement that also has other conditions, the performance is really bad. In addition, if you look at Profiler, the number of reads from the database goes through the roof. But if I first do the full-text search and put the results in a table variable and use that variable, everything goes super fast. The number of reads is also much lower. It seems that in the "bad" scenarios it somehow gets stuck in a loop, which causes it to read many times from the database, but of course I don't understand why.

    Now the first question is: why is that happening? And the second question is: how scalable are table variables? What if the search returns tens of thousands of records? Is it still going to be fast? Any ideas? Thanks
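    On the scalability question: a table variable has no statistics, so the optimizer has to guess at its row count; once the full-text hit list grows into the tens of thousands, a temporary table (which does get statistics and can be indexed) is the usual next step. A sketch using the same placeholder names as the scenarios above:

        CREATE TABLE #hits (id uniqueidentifier PRIMARY KEY);

        INSERT INTO #hits (id)
        SELECT [key] FROM CONTAINSTABLE(table1, *, 'something');

        SELECT t1.*
        FROM table1 AS t1
        JOIN #hits AS h ON h.id = t1.id
        WHERE t1.field1 = 90
          AND t1.field2 = 'something';

        DROP TABLE #hits;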


  • How can I test this SQL Server performance Utility?

    - by Martin Smith
    As part of my MSc I need to do a three month project later this year. I have decided to do something which will likely be useful for me in the workplace and spend the time getting to understand SQL Server internals. The deliverable for this project will be a performance advisor looking at a variety of different rules. Some static such as finding redundant indexes, some more dynamic such as using XEvents to find outlying invocations of stored procedure execution times when certain parameters are passed. I am struggling to come up with a good way of testing this though. I can obviously design a "bad" database and a synthetic workload that my tool will pick up issues on but I also need to demonstrate that it has real world utility. Looking at the self tuning database literature it is common to use TPC benchmarks but I've had a look at the TPCC site and it looks very time consuming to implement and not that good a fit to my project's testing needs in any event (I would still be able to "rig" it by the decisions I made on indexing or physical architecture). Plan A would be to find willing beta tester(s) but in the event that isn't possible I will need a fallback plan. The best idea I have come up with so far is to use the various MS sample applications as examples of real world applications. e.g. http://msftdpprodsamples.codeplex.com/ http://www.asp.net/community/projects/ Does anyone have any better suggestions?


  • How do I delete a foreign key in SQLAlchemy?

    - by Travis
    I'm using SQLAlchemy Migrate to keep track of database changes and I'm running into an issue with removing a foreign key. I have two tables, t_new is a new table, and t_exists is an existing table. I need to add t_new, then add a foreign key to t_exists. Then I need to be able to reverse the operation (which is where I'm having trouble). t_new = sa.Table("new", meta.metadata, sa.Column("new_id", sa.types.Integer, primary_key=True) ) t_exists = sa.Table("exists", meta.metadata, sa.Column("exists_id", sa.types.Integer, primary_key=True), sa.Column( "new_id", sa.types.Integer, sa.ForeignKey("new.new_id", onupdate="CASCADE", ondelete="CASCADE"), nullable=False ) ) This works fine: t_new.create() t_exists.c.new_id.create() But this does not: t_exists.c.new_id.drop() t_new.drop() Trying to drop the foreign key column gives an error: 1025, "Error on rename of '.\my_db_name\#sql-1b0_2e6' to '.\my_db_name\exists' (errno: 150)" If I do this with raw SQL, i can remove the foreign key manually then remove the column, but I haven't been able to figure out how to remove the foreign key with SQLAlchemy? How can I remove the foreign key, and then the column?
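    For reference, this is what has to happen at the MySQL level: the foreign key constraint must be dropped before the column (and the referencing column before the parent table), which is exactly what the errno 150 / error 1025 failure is complaining about. The constraint name below is only a guess; SHOW CREATE TABLE reveals the name MySQL actually generated.

        ALTER TABLE `exists` DROP FOREIGN KEY `exists_ibfk_1`;
        ALTER TABLE `exists` DROP COLUMN `new_id`;
        DROP TABLE `new`;

    Whatever SQLAlchemy Migrate construct ends up being used, it needs to issue an equivalent of that first ALTER before the column drop.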


  • Creating and publishing an Excel file in MOSS 2007 using data from SQL Server

    - by Diomos
    Hello, I need help in this matter: We have an Excel file template in which all the calculations are already set up. A user can request a 'report'. The idea is to create a button on our site (a SharePoint portal); after clicking it, a new Excel file is generated. This means getting current data from the database (SQL Server 2005 SP2), importing it into the template, letting the calculations generate the proper data, and then allowing the user to see the file. For now it's enough to publish the final Excel file in a document library. I am quite new to WSS 3.0 and MOSS 2007 and I need some advice on what the best solution might be; it looks like quite a complex task to me. Is there some direct way to accomplish this? Or maybe I need one tool to get data from the database and import it into the Excel file (SSRS?) and another tool to publish it in the document library (MOSS 2007 Excel Services?). I heard something about PerformancePoint Server 2007; is this the way to go? Thanks in advance for any advice!


  • Inserting an image into SQL Server gives an "operand type clash"

    - by Termedi
    I'm trying to save an image in a sql server 2000 database. The data type of the column is image. Here is the code: Image Upload: <?php include('config.php'); if(is_uploaded_file($_FILES['userfile']['tmp_name'])) { $fileName = $_FILES['userfile']['name']; $tmpName = $_FILES['userfile']['tmp_name']; $fileSize = $_FILES['userfile']['size']; $fileType = $_FILES['userfile']['type']; $size = filesize($tmpName); set_magic_quotes_runtime(0);//to desactive the default escape spacials caracters made by PHP in the externes files $img_binaire = base64_encode(fread(fopen(str_replace("'","''",$tmpName), "r"), $size)); $query = "INSERT INTO test_image (image_name, image_content, image_size) ". "VALUES ('{$fileName}','{$img_binaire}', '{$size}')"; odbc_exec($conn, $query) or die('Error, query failed'); echo "<br>File $fileName uploaded<br>"; echo "<br>File Size: $fileSize <br>"; } ?> Image Show: <?php include('config.php'); $sql = "select * from test_image where id =2"; $rsl = odbc_exec($conn, $sql); $image_info = odbc_fetch_array($rsl); //$count = sizeof($image_info['image_content']); //header('Accept-Ranges: bytes'); //header('Content-Length: '.$image_info['image_size']); //header("Content-length: 17397"); header('Content-Type: image/jpeg'); echo base64_decode($image_info['image_content']); //echo bindec($image_info['image_content']); ?> It gives the following error: Error: Warning: odbc_exec() [function.odbc-exec]: SQL error: [Microsoft][ODBC SQL Server Driver][SQL Server]Operand type clash: text is incompatible with image, SQL state 22005 in SQLExecDirect in C:\xampp\htdocs\test\upload.php on line 25 Error, query failed What do I need to do differently?


  • SQL Server 2008: Comparing similar records - Need to still display an ID for a record when the JOIN has no matches

    - by aleppke
    I'm writing a SQL Server 2008 report that will compare genetic test results for animals. A genetic test consists of an animalId, a gene and a result. Not all animals will have the same genes tested, but I need to be able to display the results side-by-side for a given set of animals and only include the genes that are present for at least one of the selected animals. My TestResult table has the following data in it:

        animalId  gene  result
        1         a     CC
        1         b     CT
        1         d     TT
        2         a     CT
        2         b     CT
        2         c     TT
        3         a     CT
        3         b     TT
        3         c     CC
        3         d     CC
        3         e     TT

    I need to generate a result set that looks like the following. Note that Animal 3 is not being displayed (the user doesn't want to see its results) and neither are results for gene "e", since neither Animal 1 nor Animal 2 has a result for that gene:

        SireID  SireResult  CalfID  CalfResult  Gene
        1       CC          2       CT          a
        1       CT          2       CT          b
        1       NULL        2       TT          c
        1       TT          2       NULL        d

    But I can only manage to get this:

        SireID  SireResult  CalfID  CalfResult  Gene
        1       CC          2       CT          a
        1       CT          2       CT          b
        NULL    NULL        2       TT          c
        1       TT          NULL    NULL        d

    This is the query I'm using:

        SELECT sire.animalId AS 'SireID'
              ,sire.result   AS 'SireResult'
              ,calf.animalId AS 'CalfID'
              ,calf.result   AS 'CalfResult'
              ,sire.gene     AS 'Gene'
        FROM (SELECT s.animalId
                    ,s.result
                    ,m1.gene
              FROM (SELECT animalId
                          ,result
                          ,gene
                    FROM TestResult
                    WHERE animalId IN (1)) s
              FULL JOIN (SELECT DISTINCT gene
                         FROM TestResult
                         WHERE animalId IN (1, 2)) m1
                ON s.marker = m1.marker) sire
        FULL JOIN (SELECT c.animalId
                         ,c.result
                         ,m2.gene
                   FROM (SELECT animalId
                               ,result
                               ,gene
                         FROM TestResult
                         WHERE animalId IN (2)) c
                   FULL JOIN (SELECT DISTINCT gene
                              FROM TestResult
                              WHERE animalId IN (1, 2)) m2
                     ON c.gene = m2.gene) calf
          ON sire.gene = calf.gene

    How do I get the SireIDs and CalfIDs to display their values when they don't have a record associated with a particular gene? I was thinking of using COALESCE but I can't figure out how to specify the correct animalId to pass in. Any help would be appreciated.
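    One way to get the desired shape is to build the gene list once from the two animals of interest and LEFT JOIN each animal's results back onto it; the animal ids can then be emitted as constants, so they appear even when one side has no matching test. A minimal sketch with the sire and calf ids hard-coded as in the question:

        SELECT 1        AS SireID,
               s.result AS SireResult,
               2        AS CalfID,
               c.result AS CalfResult,
               g.gene   AS Gene
        FROM (SELECT DISTINCT gene
              FROM TestResult
              WHERE animalId IN (1, 2)) g
        LEFT JOIN TestResult s ON s.gene = g.gene AND s.animalId = 1
        LEFT JOIN TestResult c ON c.gene = g.gene AND c.animalId = 2;

    In the real report the two literal ids would become parameters.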


  • HOWTO - Compare a date string to datetime in SQL Server?

    - by Guy
    In SQL Server I have a DATETIME column which includes a time element. Example:

        '14 AUG 2008 14:23:019'

    What is the best method to only select the records for a particular day, ignoring the time part? Example (not safe, as it does not match the time part and returns no rows):

        DECLARE @p_date DATETIME
        SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 )

        SELECT *
        FROM table1
        WHERE column_datetime = @p_date

    Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question, as DATETIME stuff in MSSQL is probably the topic I look up most in SQLBOL.

    Update: Clarified the example to be more specific.

    Edit: Sorry, but I've had to down-mod WRONG answers (answers that return wrong results). @Jorrit: WHERE (date>'20080813' AND date<'20080815') will return the 13th and the 14th. @wearejimbo: Close, but no cigar! Badge awarded to you. You missed out records written at 14/08/2008 23:59:001 to 23:59:999 (i.e. less than 1 second before midnight).
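    For reference, a widely used pattern for this is a half-open range against the untouched column: it catches every time value in the day (right up to 23:59:59.997) and, unlike wrapping the column in CONVERT, it leaves any index on column_datetime usable.

        DECLARE @p_date DATETIME
        SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 )

        SELECT *
        FROM table1
        WHERE column_datetime >= @p_date
          AND column_datetime <  DATEADD(day, 1, @p_date)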


  • Newbie T-SQL dynamic stored procedure -- how can I improve it?

    - by Andy Jones
    I'm new to T-SQL; all my experience is in a completely different database environment (Openedge). I've learned enough to write the procedure below -- but also enough to know that I don't know enough! This routine will have to go into a live environment soon, and it works, but I'm quite certain there are a number of c**k-ups and gotchas in it that I know nothing about. The routine copies data from table A to table B, replacing the data in table B. The tables could be in any database. I plan to call this routine multiple times from another stored procedure. Permissions aren't a problem: the routine will be run by the dba as a timed job. Could I have your suggestions as to how to make it fit best-practice? To bullet-proof it? ALTER PROCEDURE [dbo].[copyTable2Table] @sdb varchar(30), @stable varchar(30), @tdb varchar(30), @ttable varchar(30), @raiseerror bit = 1, @debug bit = 0 as begin set nocount on declare @source varchar(65) declare @target varchar(65) declare @dropstmt varchar(100) declare @insstmt varchar(100) declare @ErrMsg nvarchar(4000) declare @ErrSeverity int set @source = '[' + @sdb + '].[dbo].[' + @stable + ']' set @target = '[' + @tdb + '].[dbo].[' + @ttable + ']' set @dropStmt = 'drop table ' + @target set @insStmt = 'select * into ' + @target + ' from ' + @source set @errMsg = '' set @errSeverity = 0 if @debug = 1 print('Drop:' + @dropStmt + ' Insert:' + @insStmt) -- drop the target table, copy the source table to the target begin try begin transaction exec(@dropStmt) exec(@insStmt) commit end try begin catch if @@trancount > 0 rollback select @errMsg = error_message(), @errSeverity = error_severity() end catch -- update the log table insert into HHG_system.dbo.copyaudit (copytime, copyuser, source, target, errmsg, errseverity) values( getdate(), user_name(user_id()), @source, @target, @errMsg, @errSeverity) if @debug = 1 print ( 'Message:' + @errMsg + ' Severity:' + convert(Char, @errSeverity) ) -- handle errors, return value if @errMsg <> '' begin if @raiseError = 1 raiserror(@errMsg, @errSeverity, 1) return 1 end return 0 END Thanks!
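    One small hardening step worth folding into the routine (sketched below): build the db.dbo.table names with QUOTENAME and declare the name parameters as sysname, so a stray bracket or dot in a database or table name cannot break out of the generated statement. The literal values here are only illustrative.

        DECLARE @sdb sysname, @stable sysname, @tdb sysname, @ttable sysname
        DECLARE @source nvarchar(400), @target nvarchar(400)

        SET @sdb    = N'SourceDb'      -- illustrative values
        SET @stable = N'SomeTable'
        SET @tdb    = N'TargetDb'
        SET @ttable = N'SomeTable'

        SET @source = QUOTENAME(@sdb) + N'.[dbo].' + QUOTENAME(@stable)
        SET @target = QUOTENAME(@tdb) + N'.[dbo].' + QUOTENAME(@ttable)

    The same @source/@target strings then drop into the existing drop and select-into statements unchanged.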


  • ADO/SQL Server: What is the error code for "timeout expired"?

    - by Ian Boyd
    i'm trying to trap a "timeout expired" error from ADO. When a timeout happens, ADO returns: Number: 0x80040E31 (DB_E_ABORTLIMITREACHED in oledberr.h) SQLState: HYT00 NativeError: 0 The NativeError of zero makes sense, since the timeout is not a function of the database engine (i.e. SQL Server), but of ADO's internal timeout mechanism. The Number (i.e. the COM hresult) looks useful, but the definition of DB_E_ABORTLIMITREACHED in oledberr.h says: Execution stopped because a resource limit was reached. No results were returned. This error could apply to things besides "timeout expired" (some potentially server-side), such as a governor that limits: CPU usage I/O reads/writes network bandwidth and stops a query. The final useful piece is SQLState, which is a database-independent error code system. Unfortunately the only reference for SQLState error codes i can find have no mention of HYT00. What to do? What do do? Note: i can't trust 0x80040E31 (DB_E_ABORTLIMITREACHED) to mean "timeout expired", anymore than i could trust 0x80004005 (E_UNSPECIFIED_ERROR) to mean "Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim". My pseudo-question becomes: does anyone have documentation on what the SQLState "HYT000" means? And my real question still remains: How can i specifically trap an ADO timeout expired exception thrown by ADO? Gotta love the questions where the developer is trying to "do the right thing", but nobody knows how to do the right thing. Also gotta love how googling for DB_E_ABORTLIMITREACHED and this question is #9, with MSDN nowhere to be found. Update 3 From the OLEdb ICommand.Execute reference: DB_E_ABORTLIMITREACHED Execution has been aborted because a resource limit has been reached. For example, a query timed out. No results have been returned. "For example", meaning not an exhaustive list.


  • How can I remove an "ALMOST" Duplicate using LINQ? ( OR SQL? )

    - by Atomiton
    This should be an easy one for the LINQ gurus out there. I'm doing a complex query using UNIONs and CONTAINSTABLE in my database to return ranked results to my application. I'm getting duplicates in my returned data. This is expected. I'm using CONTAINSTABLE and CONTAINS to get all the results I need. CONTAINSTABLE is ranked by SQL and CONTAINS (which is run only on the Keywords field) is hard-code-ranked by me. (Sorry if that doesn't make sense.) Anyway, because the tuples aren't identical (their rank is different) a duplicate is returned. I figure the best way to deal with this is to use LINQ. I know I'll be using the Distinct() extension method, but do I have to implement the IEqualityComparer interface? I'm a little fuzzy on how to do this. For argument's sake, say my resultset is structured like this class:

        class Content {
            ContentID int   // KEY
            Rank int
            Description String
        }

    If I have a List<Content> how would I write the Distinct() method to exclude Rank? Ideally I'd like to keep the Content's highest Rank. So, if one Content's Rank is 112 and the other is 76, I'd like to keep the 112 rank. Hopefully I've given enough information.
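    Since the title leaves the SQL door open: if the de-duplication can be pushed down to the database, ROW_NUMBER() (SQL Server 2005 and later) keeps exactly one row per ContentID, the one with the highest Rank. The name RankedResults below is just a stand-in for the existing UNION/CONTAINSTABLE query.

        WITH ranked AS (
            SELECT ContentID, [Rank], Description,
                   ROW_NUMBER() OVER (PARTITION BY ContentID ORDER BY [Rank] DESC) AS rn
            FROM RankedResults
        )
        SELECT ContentID, [Rank], Description
        FROM ranked
        WHERE rn = 1;

    Doing it this way also means the duplicates never cross the wire to the application.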


  • Free tools/libraries to compare tables with filtering for SQL Server 2008 and visualize/sync diffs t

    - by MicMit
    I am building a GUI in C# for a content manager and am looking for tools, code snippets or open libraries (code ideally in C#) which allow me the following:

    1. For table A in database X (test) and table A in database Y (production), and for a simple filter (e.g. listname = "XYZ"), I need to show additions/deletions/updates in some way, which might be side-by-side or just an HTML report, e.g. "2 records added" (an HTML table with some fields) and "2 records deleted" (an HTML table with some fields). Considering that this task is very common, I guess certain components should exist? Components could either return collections from the given parameters for further visualizing, or just produce the reports mentioned above. (A plain-SQL starting point is sketched after this list.)

    2. I need to push changes for the filter I mentioned in 1 and update the table in the production database for this filter only (i.e. for the particular list approved by the content person). Again, there are probably certain SQL code generators - components in addition to diffs, or standalone.

    3. The key thing: the tools/libraries should be suitable for integration with the existing application in C#.
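    For item 1, a lightweight starting point in plain T-SQL (EXCEPT is available in SQL Server 2008) is to diff the filtered slice in both directions; this assumes the test and production databases are reachable from the same instance, and X/Y/A/listname are the placeholders from the question.

        SELECT * FROM X.dbo.A WHERE listname = 'XYZ'
        EXCEPT
        SELECT * FROM Y.dbo.A WHERE listname = 'XYZ';   -- rows in test that are missing or different in production

        SELECT * FROM Y.dbo.A WHERE listname = 'XYZ'
        EXCEPT
        SELECT * FROM X.dbo.A WHERE listname = 'XYZ';   -- rows in production that are missing or different in test

    The two result sets can then be rendered as the side-by-side or HTML report from the C# application.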


  • How can I factor out repeated expressions in an SQL Query? Column aliases don't seem to be the ticke

    - by Weston C
    So, I've got a query that looks something like this: SELECT id, DATE_FORMAT(CONVERT_TZ(callTime,'+0:00','-7:00'),'%b %d %Y') as callDate, DATE_FORMAT(CONVERT_TZ(callTime,'+0:00','-7:00'),'%H:%i') as callTimeOfDay, SEC_TO_TIME(callLength) as callLength FROM cs_calldata WHERE customerCode='999999-abc-blahblahblah' AND CONVERT_TZ(callTime,'+0:00','-7:00') >= '2010-04-25' AND CONVERT_TZ(callTime,'+0:00','-7:00') <= '2010-05-25' If you're like me, you probably start thinking that maybe it would improve readability and possibly the performance of this query if I wasn't asking it to compute CONVERT_TZ(callTime,'+0:00','-7:00') four separate times. So I try to create a column alias for that expression and replace further occurances with that alias: SELECT id, CONVERT_TZ(callTime,'+0:00','-7:00') as callTimeZoned, DATE_FORMAT(callTimeZoned,'%b %d %Y') as callDate, DATE_FORMAT(callTimeZoned,'%H:%i') as callTimeOfDay, SEC_TO_TIME(callLength) as callLength FROM cs_calldata WHERE customerCode='5999999-abc-blahblahblah' AND callTimeZoned >= '2010-04-25' AND callTimeZoned <= '2010-05-25' This is when I learned, to quote the MySQL manual: Standard SQL disallows references to column aliases in a WHERE clause. This restriction is imposed because when the WHERE clause is evaluated, the column value may not yet have been determined. So, that approach would seem to be dead in the water. How is someone writing queries with recurring expressions like this supposed to deal with it?
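    One way around the restriction is to compute the expression once in a derived table (or a view) and then filter and format the already-converted column; a sketch against the same table:

        SELECT id,
               DATE_FORMAT(callTimeZoned, '%b %d %Y') AS callDate,
               DATE_FORMAT(callTimeZoned, '%H:%i')    AS callTimeOfDay,
               SEC_TO_TIME(callLength)                AS callLength
        FROM (
            SELECT id, callLength, customerCode,
                   CONVERT_TZ(callTime, '+0:00', '-7:00') AS callTimeZoned
            FROM cs_calldata
        ) AS c
        WHERE customerCode = '999999-abc-blahblahblah'
          AND callTimeZoned >= '2010-04-25'
          AND callTimeZoned <= '2010-05-25';

    The trade-off is that filtering on the converted value can defeat an index on callTime; shifting the two date constants by the offset instead of converting the column keeps the original WHERE clause index-friendly.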


  • Linq-to-SQL: How to shape the data with group by?

    - by Cheeso
    I have an example database, it contains tables for Movies, People and Credits. The Movie table contains a Title and an Id. The People table contains a Name and an Id. The Credits table relates Movies to the People that worked on those Movies, in a particular role. The table looks like this: CREATE TABLE [dbo].[Credits] ( [Id] [int] IDENTITY (1, 1) NOT NULL PRIMARY KEY, [PersonId] [int] NOT NULL FOREIGN KEY REFERENCES People(Id), [MovieId] [int] NOT NULL FOREIGN KEY REFERENCES Movies(Id), [Role] [char] (1) NULL In this simple example, the [Role] column is a single character, by my convention either 'A' to indicate the person was an actor on that particular movie, or 'D' for director. I'd like to perform a query on a particular person that returns the person's name, plus a list of all the movies the person has worked on, and the roles in those movies. If I were to serialize it to json, it might look like this: { "name" : "Clint Eastwood", "movies" : [ { "title": "Unforgiven", "roles": ["actor", "director"] }, { "title": "Sands of Iwo Jima", "roles": ["director"] }, { "title": "Dirty Harry", "roles": ["actor"] }, ... ] } How can I write a LINQ-to-SQL query that shapes the output like that? I'm having trouble doing it efficiently. if I use this query: int personId = 10007; var persons = from p in db.People where p.Id == personId select new { name = p.Name, movies = (from m in db.Movies join c in db.Credits on m.Id equals c.MovieId where (c.PersonId == personId) select new { title = m.Title, role = (c.Role=="D"?"director":"actor") }) }; I get something like this: { "name" : "Clint Eastwood", "movies" : [ { "title": "Unforgiven", "role": "actor" }, { "title": "Unforgiven", "role": "director" }, { "title": "Sands of Iwo Jima", "role": "director" }, { "title": "Dirty Harry", "role": "actor" }, ... ] } ...but as you can see there's a duplicate of each movie for which Eastwood played multiple roles. How can I shape the output the way I want?


  • How to allow the user to select an input file for my SQL AdventureWorks web application?

    - by salvationishere
    I have a SQL and VS 2008 web application that I need to capture the full file path (directories and file name) from the selected file on the client computer. So user selects a file and then clicks on one of the buttons which transfers control to my code for processing. So how do I get its file path? I can get the file name, but not the path. Maybe problem is that while testing this web app, I am using the server instead of client machine. And I learned that on a server you cannot read the full file path due to security concerns. Is this true? If I just want this to work on client machine, which method/class do you recommend? And then how can I test it? My web app allows the user to select an input file and then map columns from the input .MDF file with columns from the selected Adventureworks table. Finally, after they click on the "Append" button, it adds rows from the input file to that table. So I want to capture the path info after they click this button.


  • How can I protect this code from SQL Injection? A bit confused.

    - by Craig Whitley
    I've read various sources but I'm unsure how to implement them into my code. I was wondering if somebody could give me a quick hand with it? Once I've been shown how to do it once in my code I'll be able to pick it up I think! This is from an AJAX autocomplete I found on the net, although I saw something to do with it being vulnerable to SQL Injection due to the '%$queryString%' or something? Any help really appreciated! if ( isset( $_POST['queryString'] ) ) { $queryString = $_POST['queryString']; if ( strlen( $queryString ) > 0 ) { $query = "SELECT game_title, game_id FROM games WHERE game_title LIKE '%$queryString%' || alt LIKE '%$queryString%' LIMIT 10"; $result = mysql_query( $query, $db ) or die( "There is an error in database please contact [email protected]" ); while ( $row = mysql_fetch_array( $result ) ) { $game_id = $row['game_id']; echo '<li onClick="fill(\'' . $row['game_title'] . '\',' . $game_id . ');">' . $row['game_title'] . '</li>'; } } }

