Search Results

Search found 53054 results on 2123 pages for 'sql sample database'.


  • How does the data storage work? [closed]

    - by Andres Adhi
    I am really new to the whole concept of data storage, domains, servers, and everything related to them. Can someone please explain what a domain is? How are servers part of a domain, and how are databases stored on a server or domain? How can a new server connect to an existing database server to get all the data it needs? I tried to find this information on the web but haven't found a good resource, perhaps because it is really basic information. I would really appreciate it if someone could explain these concepts in plain terms. Thanks in advance.

    Read the article

  • A couple of questions about NHibernate's GuidCombGenerator

    - by Eyvind
    The following code can be found in the NHibernate.Id.GuidCombGenerator class. The algorithm creates sequential (comb) guids by combining a "random" guid with a DateTime. I have a couple of questions related to the lines that I have marked with *1) and *2) below:

        private Guid GenerateComb()
        {
            byte[] guidArray = Guid.NewGuid().ToByteArray();

            // *1)
            DateTime baseDate = new DateTime(1900, 1, 1);
            DateTime now = DateTime.Now;

            // Get the days and milliseconds which will be used to build the byte string
            TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
            TimeSpan msecs = now.TimeOfDay;

            // *2)
            // Convert to a byte array
            // Note that SQL Server is accurate to 1/300th of a millisecond so we divide by 3.333333
            byte[] daysArray = BitConverter.GetBytes(days.Days);
            byte[] msecsArray = BitConverter.GetBytes((long) (msecs.TotalMilliseconds / 3.333333));

            // Reverse the bytes to match SQL Server's ordering
            Array.Reverse(daysArray);
            Array.Reverse(msecsArray);

            // Copy the bytes into the guid
            Array.Copy(daysArray, daysArray.Length - 2, guidArray, guidArray.Length - 6, 2);
            Array.Copy(msecsArray, msecsArray.Length - 4, guidArray, guidArray.Length - 4, 4);

            return new Guid(guidArray);
        }

    First of all, for *1), wouldn't it be better to have a more recent date as the baseDate, e.g. 2000-01-01, so as to make room for more values in the future? Regarding *2), why would we care about the accuracy of DateTimes in SQL Server when we are only interested in the bytes of the DateTime and never intend to store the value in a SQL Server datetime field? Wouldn't it be better to use all the accuracy available from DateTime.Now?

    Read the article

  • Carrierwave upload to a tmp dir before saving to database

    - by user827570
    I'm trying to build a visual editor where users can click an image and are presented with an image upload form; once the upload is done I use Ajax to return the image and insert it back into the page. But this method inserts the image straight into the database, and I want users to be able to preview the image before it is inserted. So I was wondering whether the image could be uploaded with CarrierWave to a temp location, sent back to the user, and then moved to the permanent location when the user saves the page. Here's what I have so far:

        def edit_image
          @page = Page.find(1)
          @page.update_attributes(params[:page])
          @page.save
          return :text => @page.file
        end

    But this is what I want to achieve:

        def temp_image
          # uploads the received image to a temp location
          # returns the image to the user
        end

    And once the user clicks save:

        def save
          # moves the file in the temp folder to the permanent location
        end

    Cheers

    Read the article

  • Datagridview error

    - by Simon
    I have two DataGridViews. For the second one, I just copy-pasted the code from the first and changed what differed, but I get an error on the second grid when I want to view the result of my SQL query. Translated into English, the error says something like "no value given for at least one required parameter". Please help!

        private void button1_Click(object sender, EventArgs e)
        {
            string connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=save.mdb";
            try
            {
                database = new OleDbConnection(connectionString);
                database.Open();
                date = DateTime.Now.ToShortDateString();
                string queryString = "SELECT zivila.naziv, (obroki_save.skupaj_kalorij/zivila.kalorij)*100 AS Kolicina_v_gramih "
                    + "FROM (users LEFT JOIN obroki_save ON obroki_save.ID_uporabnika = users.ID) "
                    + "LEFT JOIN zivila ON zivila.ID = obroki_save.ID_zivila "
                    + "WHERE users.ID = " + a.ToString();
                loadDataGrid(queryString);
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
                return;
            }
        }

        public void loadDataGrid(string sqlQueryString)
        {
            OleDbCommand SQLQuery = new OleDbCommand();
            DataTable data = null;
            dataGridView1.DataSource = null;
            SQLQuery.Connection = null;
            OleDbDataAdapter dataAdapter = null;
            dataGridView1.Columns.Clear(); // <-- clear columns
            SQLQuery.CommandText = sqlQueryString;
            SQLQuery.Connection = database;
            data = new DataTable();
            dataAdapter = new OleDbDataAdapter(SQLQuery);
            dataAdapter.Fill(data);
            dataGridView1.DataSource = data;
            dataGridView1.AllowUserToAddRows = false;
            dataGridView1.ReadOnly = true;
            dataGridView1.Columns[0].Visible = true;
        }

        private void Form8_Load(object sender, EventArgs e)
        {
        }

        private void button2_Click(object sender, EventArgs e)
        {
            string connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=save.mdb";
            try
            {
                database = new OleDbConnection(connectionString);
                database.Open();
                date = DateTime.Now.ToShortDateString();
                string queryString = "SELECT skupaj_kalorij "
                    + "FROM obroki_save "
                    + "WHERE users.ID = " + a.ToString();
                loadDataGrid2(queryString);
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
                return;
            }
        }

        public void loadDataGrid2(string sqlQueryString)
        {
            OleDbCommand SQLQuery = new OleDbCommand();
            DataTable data = null;
            dataGridView2.DataSource = null;
            SQLQuery.Connection = null;
            OleDbDataAdapter dataAdapter = null;
            dataGridView2.Columns.Clear(); // <-- clear columns
            SQLQuery.CommandText = sqlQueryString;
            SQLQuery.Connection = database;
            data = new DataTable();
            dataAdapter = new OleDbDataAdapter(SQLQuery);
            dataAdapter.Fill(data);
            dataGridView2.DataSource = data;
            dataGridView2.AllowUserToAddRows = false;
            dataGridView2.ReadOnly = true;
            dataGridView2.Columns[0].Visible = true;
        }
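
    A likely culprit worth noting: the second query filters on users.ID, but the users table never appears in its FROM clause. The Jet/Access OLE DB provider treats an unknown identifier as an unsupplied parameter, which matches the error text. A hedged sketch of a corrected query, assuming obroki_save.ID_uporabnika links to users.ID as in the first query (the value 123 stands in for a.ToString()):

        SELECT skupaj_kalorij
        FROM obroki_save
        WHERE ID_uporabnika = 123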

    Read the article

  • Multiprogramming in Django, writing to the Database

    - by Marcus Whybrow
    Introduction

    I have the following code, which checks whether a similar model exists in the database and creates the new model if it does not:

        class BookProfile(models.Model):
            # ...
            def save(self, *args, **kwargs):
                uniqueConstraint = {'book_instance': self.book_instance, 'collection': self.collection}
                # Test for other objects with identical values
                profiles = BookProfile.objects.filter(Q(**uniqueConstraint) & ~Q(pk=self.pk))
                # If none are found create the object, else fail.
                if len(profiles) == 0:
                    super(BookProfile, self).save(*args, **kwargs)
                else:
                    raise ValidationError('A Book Profile for that book instance in that collection already exists')

    I first build my constraints, then search for a model with those values which I am enforcing must be unique: Q(**uniqueConstraint). In addition I ensure that if the save method is updating rather than inserting, we do not find this object when looking for other similar objects: ~Q(pk=self.pk). I should mention that I am implementing soft delete (with a modified objects manager which only shows non-deleted objects), which is why I must check for myself rather than relying on unique_together errors.

    Problem

    Right, that's the introduction out of the way. My problem is that when multiple identical objects are saved in quick (or near-simultaneous) succession, sometimes both get added even though the first being added should prevent the second. I have tested the code in the shell and it succeeds every time I run it. Thus my assumption is that if, say, we have two objects being added, Object A and Object B: Object A runs its check upon save() being called. Then the process saving Object B gets some time on the processor. Object B runs that same test, but Object A has not yet been added, so Object B is added to the database. Then Object A regains control of the processor; it has already run its test, so even though the identical Object B is in the database, it adds itself regardless.

    My Thoughts

    The reason I suspect multiprogramming is that each of Object A and Object B is being added through an API save view, so a request to the view is made for each save; it is not a single request with multiple sequential saves. It might be the case that Apache is creating a process for each request, causing the problem I think I am seeing. As you would expect, the problem only occurs sometimes, which is characteristic of multiprogramming or multiprocessing errors. If this is the case, is there a way to make the test-and-set part of the save() method a critical section, so that a process switch cannot happen between the test and the set?
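
    One way to get such a critical section, sketched below under the assumption of a database backend with transactional row locks (InnoDB/PostgreSQL) and a Django version that provides transaction.atomic() and select_for_update(), is to serialize competing saves by locking the parent collection row inside a transaction. The field names come from the question; the Collection model name is hypothetical:

        from django.core.exceptions import ValidationError
        from django.db import transaction

        def save(self, *args, **kwargs):
            with transaction.atomic():
                # Locking the collection row serializes all saves within one
                # collection: the second process blocks here until the first
                # commits, and then sees the row the first process inserted.
                Collection.objects.select_for_update().get(pk=self.collection_id)
                clash = (BookProfile.objects
                         .filter(book_instance=self.book_instance,
                                 collection=self.collection)
                         .exclude(pk=self.pk)
                         .exists())
                if clash:
                    raise ValidationError('A Book Profile for that book instance '
                                          'in that collection already exists')
                super(BookProfile, self).save(*args, **kwargs)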

    Read the article

  • Must all Concurrent Data Store (CDB) locks be explicitly released when closing a Berkeley DB?

    - by Steve Emmerson
    I have an application that comprises multiple processes, each accessing a single Berkeley DB Concurrent Data Store (CDB) database. Each process is single-threaded and does no explicit locking of the database. When each process terminates normally, it calls DB->close() and DB_ENV->close(). When all processes have terminated, there should be no locks on the database. Episodically, however, the database behaves as if some process were holding a write lock on it even though all processes have terminated normally. Does each process need to explicitly release all locks before calling DB_ENV->close()? If so, how does the process obtain the "locker" parameter for the call to DB_ENV->lock_vec()?

    Read the article

  • Rails: How do I unserialize from database?

    - by Macint
    Hello, I am currently trying to save information for an invoice/bill. On the invoice I want to show what the total price is made up of: the procedures and items, their price, and the quantity. So in the end I hope to get it to look like this:

        Consult [date] [total_price]
          Procedure_name [price] [qty]
          Procedure_name [price] [qty]
        Consult [date] [total_price]
          Procedure_name [price] [qty]
        etc...

    All this information is available through the database, but I want to save it as a separate copy, so that if the user changes the price of some procedures the invoice information is still correct. I thought I'd do this by serializing and saving the data to a column (consult_data) in the Invoice table. My model:

        class Invoice < ActiveRecord::Base
          ...stuff...
          serialize :consult_data
          ...
        end

    This is what I get from the form (1 consult and 3 procedures):

        {"commit"=>"Save draft", "authenticity_token"=>"MZ1OiOCtj/BOu73eVVkolZBWoN8Fy1skHqKgih7Sbzw=", "id"=>"113",
         "consults"=>[{"consult_date"=>"2010-02-20", "consult_problem"=>"ABC",
           "procedures"=>[{"name"=>"asdasdasd", "price"=>"2.0", "qty"=>"1"},
                          {"name"=>"AAAnd another one", "price"=>"45.0", "qty"=>"4"},
                          {"name"=>"asdasdasd", "price"=>"2.0", "qty"=>"1"}],
           "consult_id"=>"1"}]}

    My save action:

        def add_to_invoice
          @invoice = @current_practice.invoices.find_by_id(params[:id])
          @invoice.consult_data = params[:consults]
          if @invoice.save
            render :text => "I think it worked"
          else
            render :text => "I don't think it worked"
          end
        end

    It does save to the database, and if I look at the entry in the console I can see that it is all there:

        consult_data: "--- \n- !map:HashWithIndifferentAccess \n  consult_da..."

    (---The question---) But I can't seem to get my data back. I tried assigning the consult_data attribute to a variable and then calling "variable.consult_problem" or "variable[:consult_problem]" (I also tried looping), but it only throws NoMethodErrors at me. How do I unserialize the data from the database and turn it back into a hash that I can use? Thank you very much for any help!

    Read the article

  • Changing the character encoding of a MySQL database

    - by Julien Genestoux
    Our whole application is now able to handle UTF-8, and it will be our choice of encoding across our architecture. The last step is to change the encoding of our MySQL databases. Of course,

        ALTER TABLE db_table CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

    should convert each of the tables to the right UTF-8 encoding. Yet is there anything else I should do? I believe the my.cnf configuration file needs to be changed as well.
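
    For reference, a minimal my.cnf sketch of the server- and client-side defaults that usually accompany such a conversion; option names and sections vary between MySQL versions, so treat the values as illustrative:

        [mysqld]
        character-set-server = utf8
        collation-server     = utf8_general_ci

        [client]
        default-character-set = utf8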

    Read the article

  • how to update tables' structures keeping current data

    - by Leon
    I have a C# application that uses tables from a SQL Server 2008 database (it runs on a standalone PC with a local SQL Server). Initially I install the database on this PC with some initial data (there are some tables that the application uses and the user doesn't touch). The question is: how can I upgrade this database after the user has created some data, without harming that data? (I am still developing and may add new tables or stored procedures, or add columns to existing tables.) Thanks in advance!
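
    One common pattern is to ship idempotent upgrade scripts that check the catalog views before altering anything, so the same script can run safely against any installed database regardless of which version it is at. A minimal T-SQL sketch, with hypothetical table and column names:

        -- Add a column only if it is not there yet; existing rows keep their data.
        IF NOT EXISTS (SELECT 1 FROM sys.columns
                       WHERE object_id = OBJECT_ID('dbo.Customers')
                         AND name = 'LoyaltyPoints')
        BEGIN
            ALTER TABLE dbo.Customers ADD LoyaltyPoints INT NOT NULL DEFAULT 0;
        END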

    Read the article

  • Replicating MySQL DB to development machine - bad idea?

    - by Joel
    I am considering replicating a production MySQL database to my development machine so that I've always got current data. The production database is externally hosted. My development machine is behind an unreliable internet connection, and it is entirely possible that it could be disconnected from the internet for extended periods of time (hours). Would there be any adverse effect on the production database from doing this? (I don't strictly need live data, but it would be nice, and this is a good excuse to dabble with replication. If the consensus is that this is a bad idea, I'll set up a daily job to import the previous night's backup into my development database.)

    Read the article

  • Django database caching

    - by hekevintran
    I have a Django form that uses an integer field to look up a model object by its primary key. The form has a save() method that uses the model object referred to by the integer field. The model manager's get() method is called twice, once in the clean method and once in the save() method:

        class MyForm(forms.Form):
            id_a = fields.IntegerField()

            def clean_id_a(self):
                id_a = self.cleaned_data['id_a']
                try:
                    # here is the first call to get
                    MyModel.objects.get(id=id_a)
                except MyModel.DoesNotExist:
                    raise ValidationError('Object does not exist')

            def save(self):
                id_a = self.cleaned_data['id_a']
                # here is the second call to get
                my_model_object = MyModel.objects.get(id=id_a)
                # do other stuff

    I wasn't sure whether this hits the database twice or once, so I returned the object itself from the clean method to avoid a second get() call. Does calling get() hit the database two times? Or is the object cached in the thread?

        class MyForm(forms.Form):
            id_a = fields.IntegerField()

            def clean_id_a(self):
                id_a = self.cleaned_data['id_a']
                try:
                    # here is my workaround
                    return MyModel.objects.get(id=id_a)
                except MyModel.DoesNotExist:
                    raise ValidationError('Object does not exist')

            def save(self):
                # looking up the cleaned value returns the model object
                my_model_object = self.cleaned_data['id_a']
                # do other stuff
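
    For what it's worth, a quick way to check this empirically is to count the recorded queries; a sketch assuming DEBUG=True, since Django only records queries in debug mode:

        from django.db import connection, reset_queries

        reset_queries()
        MyModel.objects.get(id=1)
        MyModel.objects.get(id=1)
        print(len(connection.queries))  # prints 2: QuerySet.get() has no identity
                                        # map, so each call issues its own SELECT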

    Read the article

  • Capistrano 3, Rails 4, database configuration does not specify adapter

    - by Kazmin
    When I start cap production deploy it fails like this:

        DEBUG [4ee8fa7a] Command: cd /home/deploy/myapp/releases/releases/20131025212110 && (RVM_BIN_PATH=~/.rvm/bin RAILS_ENV= ~/.rvm/bin/myapp_rake assets:precompile )
        DEBUG [4ee8fa7a] rake aborted!
        DEBUG [4ee8fa7a] database configuration does not specify adapter

    You can see that "RAILS_ENV=" is actually empty, and I'm wondering why that might be happening. I assume this is the reason for the subsequent error, that there is no database configuration. The deploy.rb file is below:

        set :application, 'myapp'
        set :repo_url, '[email protected]:developer/myapp.git'
        set :branch, :master
        set :deploy_to, '/home/deploy/myapp/releases'
        set :scm, :git
        set :devpath, "/home/deploy/myapp_development"
        set :user, "deploy"
        set :use_sudo, false
        set :default_env, { rvm_bin_path: '~/.rvm/bin' }
        set :keep_releases, 5

        namespace :deploy do
          desc 'Restart application'
          task :restart do
            on roles(:app), in: :sequence, wait: 5 do
              # Your restart mechanism here, for example:
              within release_path do
                execute "bundle exec thin restart -O -C config/thin/production.yml"
              end
            end
          end

          after :restart, :clear_cache do
            on roles(:web), in: :groups, limit: 3, wait: 10 do
              within release_path do
              end
            end
          end

          after :finishing, 'deploy:cleanup'
        end

    Read the article

  • Password reset by email without a database table

    - by jpatokal
    The normal flow for resetting a user's password by mail is this:

    1. Generate a random string and store it in a database table
    2. Email the string to the user
    3. User clicks on a link containing the string
    4. String is validated against the database; if it matches, the user's password is reset

    However, maintaining a table and expiring old strings etc. seems like a bit of an unnecessary hassle. Are there any obvious flaws in this alternative approach?

    1. Generate an MD5 hash of the user's existing password
    2. Email the hash string to the user
    3. User clicks on a link containing the string
    4. String is validated by hashing the existing password again; if it matches, the user's password is reset

    Note that the user's password is already stored in a hashed and salted form, and I'm just hashing it once more to get a unique but repeatable string. And yes, there is one obvious "flaw": the reset link thus generated will not expire until the user changes their password (clicks the link). I don't really see why this would be a problem though -- if the mailbox is compromised, the user is screwed anyway.
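
    If expiry does turn out to matter, a middle ground keeps the scheme stateless but signs a timestamp together with the current password hash, so the link both expires and self-invalidates once the password changes. A minimal Python sketch, assuming a server-side secret that never touches the database (all names hypothetical):

        import hashlib
        import hmac
        import time

        SECRET = b'server-side-secret-key'  # hypothetical; keep out of the DB

        def make_reset_token(user_id, pw_hash, ttl=3600):
            expires = int(time.time()) + ttl
            msg = ('%s:%s:%s' % (user_id, expires, pw_hash)).encode()
            sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
            # user_id and expiry travel in the clear; the signature binds them
            # to the current password hash without revealing it
            return '%s:%s:%s' % (user_id, expires, sig)

        def check_reset_token(token, pw_hash):
            user_id, expires, sig = token.split(':')
            if int(expires) < time.time():
                return False  # link has expired
            msg = ('%s:%s:%s' % (user_id, expires, pw_hash)).encode()
            expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
            return hmac.compare_digest(sig, expected)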

    Read the article

  • Can not find Driver when using generic database bundle

    - by Marc
    I have a project that is built up from several OSGi bundles. One of them is a generic database bundle that defines a DataSource that can be used throughout the project. The Spring bean definition of this service is:

        <osgi:service interface="javax.sql.DataSource">
            <bean class="org.postgresql.ds.PGPoolingDataSource">
                <property name="databaseName" value="xxx" />
                <property name="serverName" value="xxx" />
                <property name="user" value="xxx" />
                <property name="password" value="xxx" />
            </bean>
        </osgi:service>

    Now, when using this DataSource in a different bundle, we get an error:

        No suitable driver found for jdbc:postgresql://localhost/xxx

    I have tried the following to add org.postgresql.Driver to the DriverManager:

    1. Instantiated an empty bean for that Driver in the Spring context, like this: <bean class="org.postgresql.Driver" />
    2. Instantiated the Driver statically in one of the classes, like this: Class.forName("org.postgresql.Driver");
    3. Added a file META-INF\services\java.sql.Driver with the content org.postgresql.Driver

    None of these solutions seems to help.

    Read the article

  • I need a program to store the database script for oracle

    - by Hakan Kara
    We are developing a project that has 3 environments (development, test, production), so there are 3 databases (actually more than 3, because we have 5 customers, so we have more than 10 databases) and they must be synchronized. There are 30 coders working on this project, and everyone adds, deletes, and changes procedures, table columns, etc. We need a program to store our database scripts, like Visual Studio's Team Foundation Server, that can:

    - Show the change history of a script file
    - Let everyone access it and put their scripts in it
    - Recover previous versions of a script file
    - Execute these scripts against a selected database
    - Compare databases by procedures (not only by name but by content of the procedure), functions, table columns, packages, etc.

    I am searching for a program like that. Which one do you suggest?

    Read the article

  • Which is better for multi-use auth, MySQL, PostgreSQL, or LDAP?

    - by Fearless
    I want to set up an Oracle Linux 6 server that gives users secure IMAP email (with Dovecot), Jabber IM, FTP (with vsftpd), and CalDAV. However, I want each user logon to be able to authenticate against all services (e.g. Joe Smith signs up once for a username and password that he can use for email, FTP, and his calendar). My question is: which database service is best suited to that application? Also, is there a way to link the database with the preexisting shell logins on the server (e.g. so I can read the root account's LogCheck emails on a different device)?

    Read the article

  • Foreign/accented characters in sql query

    - by FromCanada
    I'm using Java and Spring's JdbcTemplate class to build an SQL query in Java that queries a Postgres database. However, I'm having trouble executing queries that contain foreign/accented characters. For example, the (trimmed) code:

        JdbcTemplate select = new JdbcTemplate(postgresDatabase);
        String query = "SELECT id FROM province WHERE name = 'Ontario';";
        Integer id = select.queryForObject(query, Integer.class);

    will retrieve the province id, but if instead I use name = 'Québec' then the query fails to return any results (this value is in the database, so the problem isn't that it's missing). I believe the source of the problem is that the database I am required to use has the default client encoding set to SQL_ASCII, which according to this prevents automatic character set conversions. (The Java environment's encoding is set to 'UTF-8', while I'm told the database uses 'LATIN1' / 'ISO-8859-1'.) I was able to manually indicate the encoding when the result sets contained values with foreign characters, as a solution to a previous problem of a similar nature. For example:

        String provinceName = new String(resultSet.getBytes("name"), "ISO-8859-1");

    But now that the foreign characters are part of the query itself, this approach hasn't been successful. (I suppose since the query has to be stored in a String before being executed anyway, breaking it down into bytes and then changing the encoding only muddles the characters further.) Is there a way around this without having to change the properties of the database or reconstruct it?

    PostScript: I found this function on StackOverflow when making up a title, it didn't seem to work (I might not have used it correctly, but even if it did work it doesn't seem like it could be the best solution.):
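
    For context, in a normally encoded database the usual fix is to declare the client's encoding and let the server convert, e.g. per session (a sketch; note that this conversion is exactly what a SQL_ASCII database cannot perform, since SQL_ASCII means "no encoding declared"):

        SET client_encoding TO 'UTF8';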

    Read the article

  • MySQL ADO.NET Connector & MSSQL Integration Services

    - by user1114330
    Here I am, day three... attempting to sync a data view on a Windows Vista box (64-bit) running MSSQL 2012 and Visual Studio 2010. Sanity is slipping and hunger for progress fills my attention. I went through hell trying to get the MySQL ODBC drivers to do the job, but to no avail; everyone seems to be lost, and all the threads I can find offer solutions that do not work for me. The problem: System DSNs not being seen by SSIS ("SSIS DSN Not Showing as ODBC Data Source").

    So I make the decision to try out the ADO.NET connector, and to my surprise it actually appears in the data source selection list in SSIS. I take off running to create a Data Flow Task and an ADO.NET source (a local MSSQL DB); all is good as usual. Then I move swiftly to creating an ADO.NET destination, enter my credentials... wow, I am finally selecting a database on my Linux server! Happy, thinking that I have finally figured out a way to get the job done, I move on to mappings... nope, something is wrong. I am getting an error that hurts my eyes:

        Pipeline component has returned HRESULT error code 0xC0208457 from a method call.
        Error at Data Flow Task [ADO NET Destination [81]]: Failed to get properties of external columns.
        The table name you entered may not exist, or you do not have SELECT permission on the table
        object, and an alternative attempt to get column properties through the connection has failed.
        Detailed error messages are: You have an error in your SQL syntax; check the manual that
        corresponds to your MySQL server version for the right syntax to use near '"database"."tablename"'
        at line 1. The descriptor files on path
        C:\Program Files (x86)\Microsoft SQL Server\110\DTS\ProviderDescriptors\
        do not contain schema information for connection of type MySQL.Data.MySqlClient.MySqlConnection.

    So it looks like it can't get the information, and therefore I cannot map the tables properly. Any ideas on this would be ultra helpful... thanks in advance to all!
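
    One observation on the error text: the fragment '"database"."tablename"' suggests SSIS is emitting ANSI double-quoted identifiers, which MySQL rejects by default (it expects backticks). A hedged workaround sketch is to enable ANSI_QUOTES mode on the MySQL side before the transfer:

        -- make MySQL accept "identifier" quoting; SSIS opens its own connections,
        -- so set it globally (or in the server configuration) rather than per session
        SET GLOBAL sql_mode = 'ANSI_QUOTES';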

    Read the article

  • Searching Database by Arbitrary Date in PHP

    - by jverdi
    Suppose you have a messaging system built in PHP with a MySQL database backend, and you would like to support searching for messages using arbitrary date strings. The database includes a messages table with a 'date_created' field represented as a datetime. The arbitrary date strings accepted from the user should mirror those accepted by strtotime. For the following examples, searches are performed on March 21, 2010:

    - "January 26, 2009" would return all messages between 2009-01-26 00:00:00 and 2009-01-27 00:00:00
    - "March 8" would return all messages between 2010-03-08 00:00:00 and 2010-03-09 00:00:00
    - "Last week" would return all messages between 2010-03-14 00:00:00 and 2010-03-21 18:25:00
    - "2008" would return all messages between 2008-01-01 00:00:00 and 2009-01-01 00:00:00

    I began working with date_parse, but the number of variables grew quickly. I wonder if I am re-inventing the wheel. Does anyone have a suggestion that would work either as a general solution or one that would capture most of the possible input strings?

    Read the article

  • Load Balancing and Failover for Read-Only PostgreSQL Database

    - by Eric J.
    Scenario: Multiple application servers host web services written in Java, running in SpringSource dm Server. To implement a new requirement, they will need to query a read-only PostgreSQL database.

    Issue: To support redundancy, at least two PostgreSQL instances will be running. Access to PostgreSQL must be load balanced and must automatically fail over to the currently running instances if an instance should go down. Auto-discovery of newly running instances is desirable but not required.

    Research: I have reviewed the official PostgreSQL documentation on this issue. However, that focuses on the more general case of read/write access to the database. Top Google results tend to lead to older newsgroup messages or dead projects such as Sequoia or DB Balancer, as well as one active project, PG Pool II.

    Question: What are your real-world experiences with PG Pool II? What other simple and reliable alternatives are available?

    Read the article

  • Intermittent "No Database Selected" in PHP/MySQL?

    - by ANE
    I have a PHP/MySQL form with a dropdown box containing a list of 350 names. When any random name is selected, sometimes it works and displays info about that name from the database, and sometimes the form gives the error "No Database Selected". Here's what I've tried, pretty much grasping at straws, as I'm not a programmer:

    - Increasing max_connections in /etc/my.cnf from 200 to 2000 (even though only 4-5 connections are made and it's a lightly used server)
    - Changing mysql_pconnect to mysql_connect
    - Adding the word true to this connection string: $mysql = mysql_pconnect($hostname_mysql, $username_mysql, $password_mysql, true) or trigger_error(mysql_error(), E_USER_ERROR);
    - Changing the word require_once to require on this line: <?php require('/home/user/Connections/mysql.php'); ?>
    - Enabling MySQL and PHP query and error logging (no errors logged)

    Here is the code: [removed old bad code]

    Update: Working answer from Rob Apodaca below.

    Read the article

  • Best way to save complex Python data structures across program sessions (pickle, json, xml, database)

    - by Malcolm
    Looking for advice on the best technique for saving complex Python data structures across program sessions. Here's a list of techniques I've come up with so far:

    - pickle/cPickle
    - json
    - jsonpickle
    - xml
    - database (like SQLite)

    Pickle is the easiest and fastest technique, but my understanding is that there is no guarantee that pickle output will work across various versions of Python 2.x/3.x or across 32- and 64-bit implementations of Python. JSON only works for simple data structures. Jsonpickle seems to correct this AND seems to be written to work across different versions of Python. Serializing to XML or to a database is possible, but represents extra effort since we would have to do the serialization ourselves manually. Thank you, Malcolm
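
    A small sketch of the trade-off between the first two options: it round-trips the same structure through pickle and json to show where each is lossy or fragile.

        import json
        import pickle

        data = {'points': [(1, 2), (3, 4)], 'active': True}

        # pickle preserves Python types exactly; protocol 2 is readable by
        # both Python 2.3+ and Python 3.x
        restored = pickle.loads(pickle.dumps(data, protocol=2))
        assert restored == data

        # json is human-readable and portable across languages, but it only
        # knows JSON types: the tuples come back as lists
        restored = json.loads(json.dumps(data))
        assert restored != data
        assert restored['points'] == [[1, 2], [3, 4]]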

    Read the article

  • Static Website - Converting to Dynamic, need to import information from database on different host

    - by gvernold
    This seems really complicated to ask about, so I hope someone can help. We have a long-running static website held with a hosting company that provides PHP, Ruby on Rails, and Drupal/Joomla support. A little limited, I know, but we got reasonably decent search engine rankings and didn't want them to drop. We have two much more recently created sites on another host, written in Python/Django. The original site is now too big to manage statically, and we want to create a more dynamic site in its place without changing servers/webhosts. The data we want to provide to the 'new' dynamic site comes from the same database that powers the Django sites. What is the best solution for building the new site? Is it better to create PHP pages that connect to the database on the other host? Ruby on Rails seems like a very fast development environment, not too dissimilar to Django; would we be able to fetch data from the existing databases into a Rails site and use URLs similar to our old static pages?

    Read the article
