Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • How can I use Lucene for personal name (first name, last name) search?

    - by os111
    I'm writing a search feature for a database of NFL players. The user enters a search string like "Jason Campbell" or "Campbell" or "Jason". I'm having trouble getting the appropriate results. Which Analyzer should I use when indexing? Which Query when querying? Should I distinguish between first name and last name, or just index the full name string? I'd like the following behavior:

        Query: "Jason Campbell" - Result: exact match for 1 player, Jason Campbell
        Query: "Campbell" - Result: all players with Campbell in their name
        Query: "Jason" - Result: all players with Jason in their name
        Query: "Cambel" [misspelled] - Result: all players with Campbell in their name

    Read the article

  • Is post-sudden-power-loss filesystem corruption on an SSD drive's ext3 partition "expected behavior"?

    - by Jeremy Friesner
    My company makes an embedded Debian Linux device that boots from an ext3 partition on an internal SSD drive. Because the device is an embedded "black box", it is usually shut down the rude way, by simply cutting power to the device via an external switch. This is normally okay, as ext3's journalling keeps things in order, so other than the occasional loss of part of a log file, things keep chugging along fine. However, we've recently seen a number of units where after a number of hard power-cycles the ext3 partition starts to develop structural issues -- in particular, we run e2fsck on the ext3 partition and it finds a number of issues like those shown in the output listing at the bottom of this question. Running e2fsck until it stops reporting errors (or reformatting the partition) clears the issues.

    My question is... what are the implications of seeing problems like this on an ext3/SSD system that has been subjected to lots of sudden/unexpected shutdowns? My feeling is that this might be a sign of a software or hardware problem in our system, since my understanding is that (barring a bug or hardware problem) ext3's journalling feature is supposed to prevent these sorts of filesystem-integrity errors. (Note: I understand that user data is not journalled and so munged/missing/truncated user files can happen; I'm specifically talking here about filesystem-metadata errors like those shown below.)

    My co-worker, on the other hand, says that this is known/expected behavior because SSD controllers sometimes re-order write commands and that can cause the ext3 journal to get confused. In particular, he believes that even given normally functioning hardware and bug-free software, the ext3 journal only makes filesystem corruption less likely, not impossible, so we should not be surprised to see problems like this from time to time. Which of us is right?

        Embedded-PC-failsafe:~# ls
        Embedded-PC-failsafe:~# umount /mnt/unionfs
        Embedded-PC-failsafe:~# e2fsck /dev/sda3
        e2fsck 1.41.3 (12-Oct-2008)
        embeddedrootwrite contains a file system with errors, check forced.
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Invalid inode number for '.' in directory inode 46948. Fix<y>? yes
        Directory inode 46948, block 0, offset 12: directory corrupted Salvage<y>? yes
        Entry 'status_2012-11-26_14h13m41.csv' in /var/log/status_logs (46956) has deleted/unused inode 47075. Clear<y>? yes
        Entry 'status_2012-11-26_10h42m58.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47076. Clear<y>? yes
        Entry 'status_2012-11-26_11h29m41.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47080. Clear<y>? yes
        Entry 'status_2012-11-26_11h42m13.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47081. Clear<y>? yes
        Entry 'status_2012-11-26_12h07m17.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47083. Clear<y>? yes
        Entry 'status_2012-11-26_12h14m53.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47085. Clear<y>? yes
        Entry 'status_2012-11-26_15h06m49.csv' in /var/log/status_logs (46956) has deleted/unused inode 47088. Clear<y>? yes
        Entry 'status_2012-11-20_14h50m09.csv' in /var/log/status_logs (46956) has deleted/unused inode 47073. Clear<y>? yes
        Entry 'status_2012-11-20_14h55m32.csv' in /var/log/status_logs (46956) has deleted/unused inode 47074. Clear<y>? yes
        Entry 'status_2012-11-26_11h04m36.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47078. Clear<y>? yes
        Entry 'status_2012-11-26_11h54m45.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47082. Clear<y>? yes
        Entry 'status_2012-11-26_12h12m20.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47084. Clear<y>? yes
        Entry 'status_2012-11-26_12h33m52.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47086. Clear<y>? yes
        Entry 'status_2012-11-26_10h51m59.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47077. Clear<y>? yes
        Entry 'status_2012-11-26_11h17m09.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47079. Clear<y>? yes
        Entry 'status_2012-11-26_12h54m11.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47087. Clear<y>? yes
        Pass 3: Checking directory connectivity
        '..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953). Fix<y>? yes
        Couldn't fix parent of inode 46948: Couldn't find parent directory entry
        Pass 4: Checking reference counts
        Unattached inode 46945 Connect to /lost+found<y>? yes
        Inode 46945 ref count is 2, should be 1. Fix<y>? yes
        Inode 46953 ref count is 5, should be 4. Fix<y>? yes
        Pass 5: Checking group summary information
        Block bitmap differences: -(208264--208266) -(210062--210068) -(211343--211491) -(213241--213250) -(213344--213393) -213397 -(213457--213463) -(213516--213521) -(213628--213655) -(213683--213688) -(213709--213728) -(215265--215300) -(215346--215365) -(221541--221551) -(221696--221704) -227517 Fix<y>? yes
        Free blocks count wrong for group #6 (17247, counted=17611). Fix<y>? yes
        Free blocks count wrong (161691, counted=162055). Fix<y>? yes
        Inode bitmap differences: +(47089--47090) +47093 +47095 +(47097--47099) +(47101--47104) -(47219--47220) -47222 -47224 -47228 -47231 -(47347--47348) -47350 -47352 -47356 -47359 -(47457--47488) -47985 -47996 -(47999--48000) -48017 -(48027--48028) -(48030--48032) -48049 -(48059--48060) -(48062--48064) -48081 -(48091--48092) -(48094--48096) Fix<y>? yes
        Free inodes count wrong for group #6 (7608, counted=7624). Fix<y>? yes
        Free inodes count wrong (61919, counted=61935). Fix<y>? yes
        embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
        embeddedrootwrite: ********** WARNING: Filesystem still has errors **********
        embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks

        Embedded-PC-failsafe:~#
        Embedded-PC-failsafe:~# e2fsck /dev/sda3
        e2fsck 1.41.3 (12-Oct-2008)
        embeddedrootwrite contains a file system with errors, check forced.
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Directory entry for '.' in ... (46948) is big. Split<y>? yes
        Missing '..' in directory inode 46948. Fix<y>? yes
        Setting filetype for entry '..' in ... (46948) to 2.
        Pass 3: Checking directory connectivity
        '..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953). Fix<y>? yes
        Pass 4: Checking reference counts
        Inode 2 ref count is 12, should be 13. Fix<y>? yes
        Pass 5: Checking group summary information
        embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
        embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks

        Embedded-PC-failsafe:~#
        Embedded-PC-failsafe:~# e2fsck /dev/sda3
        e2fsck 1.41.3 (12-Oct-2008)
        embeddedrootwrite: clean, 657/62592 files, 87882/249937 blocks

    Read the article

  • Avoid implicit conversion from date to timestamp for selects with Oracle 11g using Hibernate

    - by sapporo
    I'm using Hibernate 3.2.1 criteria queries to select rows from an Oracle 11g database, filtering by a timestamp field. The field in question is of type java.util.Date in Java, and DATE in Oracle. It turns out that the field gets mapped to java.sql.Timestamp, and Oracle converts all rows to TIMESTAMP before comparing to the passed in value, bypassing the index and thereby ruining performance. One solution would be to use Hibernate's sqlRestriction() along with Oracle's TO_DATE function. That would fix performance, but requires rewriting the application code (lots of queries). So is there a more elegant solution? Since Hibernate already does type mapping, could it be configured to do the right thing?

    Read the article

  • Is there any real benefit to using ASP.Net Authentication with ASP.Net MVC?

    - by alchemical
    I've been researching this intensely for the past few days. We're developing an ASP.NET MVC site that needs to support 100,000+ users. We'd like to keep it fast, scalable, and simple. We have our own SQL database tables for user and user_role, etc. We are not using server controls. Given that there are no server controls, and that a custom MembershipProvider would need to be created, is there any benefit left to using ASP.NET Auth/Membership? The other alternative would seem to be writing custom code that drops a unique CustomerID in a cookie and authenticates with that. Or, if we're paranoid about sniffers, we could encrypt the cookie as well. Is there any real benefit in this scenario (MVC, and customer data in our own tables) to using the ASP.NET auth/membership framework, or is the fully custom solution a viable route?
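
    If the Membership tables and provider are skipped, one middle ground is to let ASP.NET forms authentication issue and encrypt the cookie while the user data stays in the custom tables. A minimal sketch, assuming a hypothetical customer object that has already been validated against those tables (names here are illustrative, not from the question):

        // At login, after validating credentials against your own user/user_role tables:
        var ticket = new FormsAuthenticationTicket(
            1,                               // version
            customer.Email,                  // name stored in the ticket
            DateTime.Now,
            DateTime.Now.AddMinutes(30),
            false,                           // not persistent
            customer.CustomerId.ToString()); // UserData: carry your own key
        string encrypted = FormsAuthentication.Encrypt(ticket);
        Response.Cookies.Add(new HttpCookie(FormsAuthentication.FormsCookieName, encrypted));

        // On later requests (e.g. in Application_AuthenticateRequest or a base controller):
        HttpCookie cookie = Request.Cookies[FormsAuthentication.FormsCookieName];
        if (cookie != null)
        {
            FormsAuthenticationTicket t = FormsAuthentication.Decrypt(cookie.Value);
            int customerId = int.Parse(t.UserData);  // your own key, no Membership involved
        }

    This keeps cookie encryption and expiry handling in the framework while leaving the user store entirely custom.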

    Read the article

  • Can the same CriteriaBuilder (JPA 2) instance be used to create multiple queries?

    - by pkainulainen
    This seems like a pretty simple question, but I have not managed to find a definitive answer yet. I have a DAO class, which naturally queries the database using criteria queries. So I would like to know whether it is safe to use the same CriteriaBuilder implementation for the creation of different queries, or whether I have to create a new CriteriaBuilder instance for each query. The following code example should illustrate what I would like to do:

        public class DAO {
            CriteriaBuilder cb = null;

            public DAO() {
                cb = getEntityManager().getCriteriaBuilder();
            }

            public List<String> getNames() {
                CriteriaQuery<String> nameSearch = cb.createQuery(String.class);
                ...
            }

            public List<Address> getAddresses(String name) {
                CriteriaQuery<Address> nameSearch = cb.createQuery(Address.class);
                ...
            }
        }

    Is it ok to do this?

    Read the article

  • fire and forget compared to http request

    - by cometta
    Hi, I'm looking for opinions from you all. I have a web application that needs to record data into another web application's database. I'd prefer not to use an HTTP GET request against the 2nd application because of the latency involved. I'm looking for a fast way to save records on the 2nd application quickly, and I came across the idea of "fire and forget". Will JMS suit this scenario? From my understanding, JMS will guarantee message delivery. Let's say I need to make at least 1000 random requests per second to the 2nd application -- should I use JMS? An HTTP request? Or XMPP instead?

    Read the article

  • ASP.NET MVC2 Data Access Layer

    - by Paul
    For a small/medium-sized project I'm trying to figure out what the 'ideal' way is to have a domain layer and a data access layer. My opinion on coupling tends more towards the view that the domain models should not be tightly coupled with the database layer; in other words, the data access layer shouldn't actually know anything about the domain objects. I've been looking at LINQ to SQL, and it wants to use its own models that it creates, so it ends up VERY tightly coupled. While I love the way you use LINQ to SQL in code, I really don't like the way it wants to make its own domain objects. What are some alternatives that I should consider? I tried NHibernate, but I did not like the way I had to query for and get back different objects. I honestly love the syntax and the way you use LINQ; I just don't want it to be so tightly coupled to the domain objects.
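
    One common way to keep the mapping technology out of the domain layer is a repository interface defined alongside the domain objects, with a LINQ to SQL implementation that projects its generated entities onto them. A minimal sketch under invented names (Customer, ICustomerRepository, MyDataContext and CustomerRows are hypothetical, not from the question):

        using System.Collections.Generic;
        using System.Linq;

        // Domain layer: knows nothing about persistence.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public interface ICustomerRepository
        {
            Customer GetById(int id);
            IList<Customer> FindByName(string name);
        }

        // Data access layer: references the domain and maps LINQ to SQL rows onto it.
        public class LinqToSqlCustomerRepository : ICustomerRepository
        {
            private readonly MyDataContext _db;  // generated DataContext stays hidden here

            public LinqToSqlCustomerRepository(MyDataContext db) { _db = db; }

            public Customer GetById(int id)
            {
                var row = _db.CustomerRows.Single(r => r.Id == id);
                return new Customer { Id = row.Id, Name = row.Name };
            }

            public IList<Customer> FindByName(string name)
            {
                return _db.CustomerRows
                          .Where(r => r.Name.Contains(name))
                          .Select(r => new Customer { Id = r.Id, Name = r.Name })
                          .ToList();
            }
        }

    The domain assembly only ever sees Customer and ICustomerRepository; swapping LINQ to SQL for NHibernate (or anything else) stays inside the repository implementation.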

    Read the article

  • WPF DataGrid issue with db4o

    - by Rich Blumer
    I am using the following code to populate a WPF DataGrid with items in my db4o OODB:

        IObjectContainer db = Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(),
            @"C:\Dev\ContractKeeper\Database\ContractKeeper.yap");
        var contractTypes = db.Query(typeof(ContractType));
        this.dataGrid1.ItemsSource = contractTypes.ToList();

    Here is the XAML:

        <Window x:Class="ContractKeeper.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:dg="http://schemas.microsoft.com/wpf/2008/toolkit"
                Title="Window1" Height="300" Width="300">
            <Grid>
                <dg:DataGrid AutoGenerateColumns="True" Margin="12,102,12,24" Name="dataGrid1" />
            </Grid>
        </Window>

    When the items get bound to the DataGrid, the gridlines appear as if there are records, but no data is displayed. Has anyone had this issue with db4o and the WPF DataGrid?
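
    One thing worth checking (an assumption, since the ContractType class is not shown in the question): WPF data binding and AutoGenerateColumns only pick up public properties, not public fields, and db4o persistent classes are often written with bare fields -- which can produce exactly this "rows but no cells" symptom. A binding-friendly shape would look roughly like:

        // Hypothetical ContractType reworked so the DataGrid can see its values.
        public class ContractType
        {
            // auto-generated columns come from public properties, not fields
            public string Name { get; set; }
            public string Description { get; set; }
        }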

    Read the article

  • Reference platform-specific System.Data.SQLite

    - by Dmitriy Nagirnyak
    Hi, I am using SQLite for unit testing and might use it as a database for local development/staging. The System.Data.SQLite assembly basically comes in two versions, x86 and x64, and the correct one has to be used for the specific platform. I have 64-bit Win7; other guys on the team might use 32-bit OSs. The server's platform is not known at this stage. If I use the 32-bit version of the assembly on a 64-bit platform I get BadImageFormatException: Could not load file or assembly 'System.Data.SQLite'. I believe something similar will happen when trying to use the 64-bit assembly on a 32-bit platform. So my question is: what is the best way to reference the SQLite assembly so that it does not depend on the platform and people can just use it? Using the 32-bit version of the assembly on a 64-bit platform would be fine (maybe there is a switch for that somewhere?). Thanks, Dmitriy.
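
    One option, sketched below under the assumption that both builds are deployed side by side in libs\x86 and libs\x64 (folder names invented here), is to let each process resolve the matching build at load time instead of referencing a fixed copy:

        using System;
        using System.IO;
        using System.Reflection;

        // Register early (e.g. at the start of Main or in a test fixture setup),
        // before anything touches System.Data.SQLite.
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            if (!args.Name.StartsWith("System.Data.SQLite", StringComparison.OrdinalIgnoreCase))
                return null;  // not our assembly -- let the default binder handle it

            string arch = (IntPtr.Size == 8) ? "x64" : "x86";  // flavour this process needs
            string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory,
                                       "libs", arch, "System.Data.SQLite.dll");
            return File.Exists(path) ? Assembly.LoadFile(path) : null;
        };

    The other common route is simply forcing every executable to target x86, so the 32-bit build works everywhere at the cost of running the whole process as 32-bit.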

    Read the article

  • Display MySQL BLOB'd image on webpage without separate script (PHP)

    - by robhardgood
    So I've got some images stored in a MySQL database as BLOB's (I know it's better to just store the directory and do it that way, but this is what I need to do for now) and I need to display them on a webpage. Now, I know how to make a script and give it an image header and pull the img src from there, but I have a lot of images from different places for different uses, so I'd have to make a ton of these scripts and I'd rather not clutter up my files like that. Anyway, does anyone know of a function or something I can use to display the image that will run on the same page?

    Read the article

  • Using PHP/MySQL with Google Maps

    - by Anders Kitson
    Hiya, I followed the tutorial below: http://code.google.com/apis/maps/articles/phpsqlajax_v3.html#outputxml. I ran into trouble near the end, and I am hoping someone else here has got this working and can help me discover my problem. Simply put, there are 4 steps to this tutorial: creating the table, populating the table, outputting XML with PHP, and creating the map. I have successfully completed all the steps; however, the outputted XML isn't read by the Google map I created. The files are all in the same directory, and I didn't change any of the file names from the tutorial. The tutorial has a step to test whether the PHP file called phpsqlajax_genxml.php is outputting the XML, and I successfully tested it and it was. The problem is that the map isn't rendering the items I have in the database, which should be converted to XML for the map to read. Any help, or pointing me in the right direction, would be much appreciated.

    Read the article

  • [jQuery] [PHP] Image manipulation

    - by robertdd
    Hello, I want to build some kind of image editor. After I upload several images I want to make a list with all the thumbnails. Then I want to be able to click on one thumbnail and rotate, duplicate, drag and drop (to change the positions of the images), or delete the image. I want all the images to be in a PHP array; if an image is deleted I want to delete its row from the array too, and if an image is dragged and dropped I want to change its position in the array as well. After the user has uploaded all the images and modified some of them, how can I make a DONE button to save the positions of the images? For this small project, how do you suggest I save the images? (Make a table in MySQL and store the names of the images in the database keyed by the session ID? By the IP?) Any suggestions are welcome! Thanks!

    Read the article

  • Is it possible to have a depends on a jQuery remote validation?

    - by David Kethel
    I am using jQuery remote validation to check whether the description is already being used:

        Description: {
            required: true,
            maxlength: 20,
            remote: function () {
                var newDescription = $("#txtDescription").val();
                var dataInput = { geoFenceDescription: newDescription };
                var r = {
                    type: "POST",
                    url: "/ATOMWebService.svc/DoesGeoFenceDescriptionExist",
                    data: JSON.stringify(dataInput),
                    contentType: "application/json; charset=utf-8",
                    dataType: "json",
                    dataFilter: function (data) {
                        var x = (JSON.parse(data)).d;
                        return JSON.stringify(!x);
                    }
                };
                return r;
            }
        },

    The problem I have is that this remote validation occurs even when the user has NOT modified the text box, and it comes back saying the description has been used because it found itself in the database. So is it possible to only run the remote validation if the text field is different from what was originally in it? I noticed that the jQuery required validation has a depends option, but I couldn't get it to work with the remote call.

    Read the article

  • Rebol MSAccess ODBC: works with DSN connection but not with DSN-less connection

    - by Rebol Tutorial
    I have tested the new free Rebol ODBC with MS Access after reading the doc here: http://www.rebol.com/docs/database.html. It works with an ODBC DSN connection, but when I tested with this DSN-less connection (MS Access 2003 file, with MS Access 2007 installed):

        connect-name: open [
            scheme: 'odbc
            target: join "{DRIVER=Microsoft Access Driver (*.mdb)}; "
                "DBQ=c:\test\test.mdb"
        ]

    it shows this error:

        >> connect-name: open [
        [    scheme: 'odbc
        [    target: join "{DRIVER=Microsoft Access Driver (*.mdb)}; "
        [    "DBQ=c:\test\test.mdb"
        [ ]
        ** Access Error: Invalid port spec: scheme odbc target join {DRIVER=Microsoft Access Driver (*.mdb)};  DBQ=c:\test\test.mdb
        ** Near: connect-name: open [ scheme: 'odbc target: join "{DRIVER=Microsoft Access Driver (*.mdb)}; " "DBQ=c:\test\...
        >>

    Do you know why? Thanks.

    Read the article

  • ASP.net custom error page

    - by c11ada
    Hey all, I'm trying to implement a custom error page. What I want is to have a single generic error page which can display the error that occurred or other information (a custom error message). When an error occurs on the website, the user should be directed to this page, which shows the error message. So, for example, if I had a page which was trying to update something in a database and something went wrong, I should be redirected to the error page, which will have some custom text like "there has been an error with bla bla bla... please contact the administrator". Hope this makes sense. Thanks.
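
    A minimal sketch of one way to wire this up in ASP.NET (ErrorPage.aspx, lblError and the Session key are placeholder names, not from the question): trap unhandled exceptions in Global.asax, stash the message, and redirect every failure to the single generic page.

        // Global.asax.cs
        protected void Application_Error(object sender, EventArgs e)
        {
            Exception ex = Server.GetLastError();
            if (ex is HttpUnhandledException && ex.InnerException != null)
                ex = ex.InnerException;  // unwrap the real cause

            // Session is not always available at this point in the pipeline;
            // logging the error and passing an id on the query string is the more robust variant.
            Session["LastErrorMessage"] = ex.Message;
            Server.ClearError();
            Response.Redirect("~/ErrorPage.aspx");
        }

        // ErrorPage.aspx.cs
        protected void Page_Load(object sender, EventArgs e)
        {
            lblError.Text = (Session["LastErrorMessage"] as string)
                ?? "There has been an error. Please contact the administrator.";
        }

    For errors not raised inside ASP.NET pages (404s, etc.), the customErrors section in web.config can point at the same page.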

    Read the article

  • How can I add GET variables to the end of the current page url using a form with php?

    - by zeckdude
    I have some database information that is being shown on a page. I am using a pagination class that uses the $_GET['page'] variable in the URL. When you click on a different pagination anchor tag, it changes $_GET['page'] to a new number in the URL and shows the corresponding results. I have search and sort features which use the $_GET['searchby'] and $_GET['search_input'] variables. The user enters their search or sort criteria in a form that uses GET. The variables are then put into the URL, allowing the correct results to be shown. The problem I am having is that whenever I click on a pagination link, it adds that to the end of the URL and erases the search or sort GET variables. The same thing happens when I submit the search/sort form. How can I add GET variables to the end of the current page URL using the anchor tag and the search/sort form?

    Read the article

  • Common way to compare timestamp in oracle, postgres and mssql

    - by Pratik
    Hi there! I am writing a SQL query which involves finding whether a timestamp falls in a particular range of days. I have written it for Postgres, but it doesn't work in Oracle and MSSQL. Is there a common way to compare timestamps across different databases? My Postgres SQL looks something like this:

        ...
        AND creation_date <  (CURRENT_TIMESTAMP - interval '5 days')
        AND creation_date >= (CURRENT_TIMESTAMP - interval '15 days')
        ...

    Thanks! Pratik

    Read the article

  • SSIS- Sharepoint list data transfer issue

    - by Vicky
    Hi, we are trying to transfer data (about 60,0000 records) from an Oracle database to a SharePoint list using SSIS, but we are getting the following error when the record count reaches around 19000:

        The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020 and
        System.ServiceModel.ProtocolException: The remote server returned an unexpected response: (400) Bad Request.

    Earlier we thought it could be because of a SharePoint list limit, so we tried reducing two of the columns, and then it went fine. So we are left with one column, of datatype DT_STR and length 400 in Oracle, because of which the issue might be happening; it is mapped to a SharePoint custom list field of multiline type. We also verified whether the length of the field is the issue, but in the Oracle DB the max length of this column across all records is only 239, so the length issue is also ruled out. Has anyone faced this kind of issue, or does anyone know its cause? Kindly let us know. Thanks and regards, Vicky

    Read the article

  • Django Auth Model Issue - AUTH_USER_MODEL Not Installed

    - by Ian Warner
    I am trying to debug this error while getting a Django project running: ImproperlyConfigured: AUTH_USER_MODEL refers to model 'accounts.User' that has not been installed. It happens when running python manage.py migrate. I must reiterate that I am in no way a Python or Django expert -- I have simply inherited someone else's project that I am trying to get running for the team here. I have followed the steps to install the required Postgres modules (including south) and to create the database for Postgres. Any help on how to debug this is appreciated. settings/base.py contains:

        INSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + LOCAL_APPS

        LOCAL_APPS = (
            'apps.core',
            'apps.accounts',
            'apps.project_tool',
            'apps.internal',
            'apps.external',
        )

    So apps.accounts exists, but it asks for AUTH_USER_MODEL = 'accounts.User' -- should it be AUTH_USER_MODEL = 'apps.accounts.User'?

    Read the article

  • Linq to SQL and SQL Server Compact Error: "There was an error parsing the query."

    - by Jeremy
    I created a SQL Server Compact database (MyDatabase.sdf) and populated it with some data. I then ran SQLMetal.exe and generated a LINQ to SQL class (MyDatabase.mdf). Now I'm trying to select all records from a table with a relatively straightforward select, and I get the error: "There was an error parsing the query. [ Token line number = 3,Token line offset = 67,Token in error = MAX]" Here is my select code:

        public IEnumerable ListItems()
        {
            MyDatabase db_m = new MyDatabase(@"c:\mydatabase.sdf");
            return db_m.TestTable.Select(test => new Item()
            {
                ....
            });
        }

    I've read that LINQ to SQL works with SQL Compact; is there some other configuration I need to do?

    Read the article

  • Magic quotes in PHP

    - by VirtuosiMedia
    According to the PHP manual, in order to make code more portable, they recommend using something like the following for escaping data:

        if (!get_magic_quotes_gpc()) {
            $lastname = addslashes($_POST['lastname']);
        } else {
            $lastname = $_POST['lastname'];
        }

    I have other validation checks that I will be performing, but how secure is the above strictly in terms of escaping data? I also saw that magic quotes will be deprecated in PHP 6. How will that affect the above code? I would prefer not to have to rely on a database-specific escaping function like mysql_real_escape_string().

    Read the article

  • Combining a one-to-one relationship into one object in Fluent NHibernate

    - by Mike C.
    I have a one-to-one relationship in my database, and I'd like to combine it into one object in Fluent NHibernate. The specific tables I am talking about are the aspnet_Users and aspnet_Membership tables from the default ASP.NET Membership implementation. I'd like to combine those into one simple User object and only pull in the fields I want. I would also like to make this mapping read-only, as I want to use the built-in ASP.NET Membership API for modifications; I simply want to take advantage of lazy loading. Any help would be appreciated. Thanks!
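
    In Fluent NHibernate a single class map can pull columns from a second table via Join, and ReadOnly() keeps NHibernate from ever writing to it. A minimal sketch (which properties to expose is an assumption made here: UserId/UserName from aspnet_Users, Email/CreateDate from aspnet_Membership):

        public class User
        {
            public virtual Guid UserId { get; set; }
            public virtual string UserName { get; set; }      // aspnet_Users
            public virtual string Email { get; set; }         // aspnet_Membership
            public virtual DateTime CreateDate { get; set; }  // aspnet_Membership
        }

        public class UserMap : ClassMap<User>
        {
            public UserMap()
            {
                Table("aspnet_Users");
                ReadOnly();   // mutate through the Membership API, not NHibernate
                LazyLoad();

                Id(x => x.UserId, "UserId").GeneratedBy.Assigned();
                Map(x => x.UserName, "UserName");

                Join("aspnet_Membership", join =>
                {
                    join.KeyColumn("UserId");
                    join.Map(x => x.Email, "Email");
                    join.Map(x => x.CreateDate, "CreateDate");
                });
            }
        }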

    Read the article

  • SQL 2000 Not Supported by .NET Framework Data Provider for SQL Server in VS2010's Server Explorer D

    - by Canoehead
    Just tried creating a data connection to a SQL 2000 database in VS2010's Server Explorer using a .NET Framework Data Provider for SQL Server (versus OLE) and found that it didn't work. VS2010 complained that I had to use SQL Server 2005 and up. This used to work in VS2008 (using .NET Framework Data Provider for SQL Server instead of the .NET Framework Data Provider for OLE DB). Is this just a VS2010 restriction or has the ability to connect to SQL 2000 with .NET Framework Data Provider for SQL Server been obsoleted in a post-2.0 version of .NET being used by VS2010? Anyone know why this was done by MS (please don't speculate - I can do that myself ;)?

    Read the article

  • best way to reference business objects from presentation layer..?

    - by Vytas999
    I want to develop an enterprise app that includes a Windows Forms presentation layer, middle-tier components for business logic and data access, and an MS SQL Server database. The middle-tier components should contain some business objects and will be called from the presentation layer using .NET Remoting. Which is the best way (and why) to reference these business objects from the presentation layer?
    a) Create a class library project implementing the business objects. Reference this project from the presentation layer and the middle tier.
    b) Create an interface library project defining the business objects. Create a class library project implementing the interfaces. Reference the class library project from the middle tier. Reference the interface library project from the presentation layer.
    c) Create separate class library projects for the middle tier and the presentation layer. Reference the corresponding project from the presentation layer.
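
    As an illustration of option (b) only -- a sketch with invented names (IOrderService, OrderService, the URI and port are not from the question) -- the presentation layer compiles against an interface assembly, while the middle tier hosts the concrete business object over .NET Remoting:

        // Shared interface assembly (referenced by both tiers)
        public interface IOrderService
        {
            decimal GetOrderTotal(int orderId);
        }

        // Middle tier: concrete business object, remotable via MarshalByRefObject
        public class OrderService : MarshalByRefObject, IOrderService
        {
            public decimal GetOrderTotal(int orderId)
            {
                // ... business logic / data access ...
                return 0m;
            }
        }

        // Middle-tier host startup
        ChannelServices.RegisterChannel(new TcpChannel(8080), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(OrderService), "OrderService.rem", WellKnownObjectMode.Singleton);

        // Presentation layer: only the interface assembly is referenced
        IOrderService orders = (IOrderService)Activator.GetObject(
            typeof(IOrderService), "tcp://appserver:8080/OrderService.rem");
        decimal total = orders.GetOrderTotal(42);

    With option (a) the presentation layer would instead reference the implementation assembly directly, which is simpler but ties the client build to the server-side types.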

    Read the article

  • c# gridview row click

    - by Martijn
    When I click on a row in my GridView, I want to go to another page with the id I get from the database. In my RowCreated event I have the following line:

        e.Row.Attributes.Add("onClick",
            ClientScript.GetPostBackClientHyperlink(this.grdSearchResults, "Select$" + e.Row.RowIndex));

    To prevent error messages I have this code:

        protected override void Render(HtmlTextWriter writer)
        {
            // .NET will refuse to accept "unknown" postbacks for security reasons.
            // Because of this we have to register all possible callbacks.
            // This must be done in Render, hence the override.
            for (int i = 0; i < grdSearchResults.Rows.Count; i++)
            {
                Page.ClientScript.RegisterForEventValidation(
                    new System.Web.UI.PostBackOptions(grdSearchResults, "Select$" + i.ToString()));
            }

            // Do the standard rendering stuff
            base.Render(writer);
        }

    My question is: how can I give a row a unique id (from the DB) and, when I click the row, have another page open (like clicking an href) so that page can read the id? Thnx
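
    A sketch of one way to carry the database id through the select postback (assuming the key column is called "Id" and the target page is Details.aspx -- both names invented here): expose it via DataKeyNames, redirect in SelectedIndexChanged, and read it from the query string on the other page.

        // Markup: <asp:GridView ... DataKeyNames="Id"
        //                       OnSelectedIndexChanged="grdSearchResults_SelectedIndexChanged" />

        protected void grdSearchResults_SelectedIndexChanged(object sender, EventArgs e)
        {
            // SelectedDataKey holds the "Id" value of the row whose Select command fired
            int id = (int)grdSearchResults.SelectedDataKey.Value;
            Response.Redirect("Details.aspx?id=" + id);
        }

        // Details.aspx.cs
        protected void Page_Load(object sender, EventArgs e)
        {
            int id;
            if (int.TryParse(Request.QueryString["id"], out id))
            {
                // load and display the record for this id
            }
        }

    Since each row already posts back "Select$" + RowIndex via the onClick handler, the Select command will raise SelectedIndexChanged without extra wiring.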

    Read the article
