Search Results

Search found 8 results on 1 page for 'jenkz'.

  • MySQL search for user and their roles

    - by Jenkz
    I am re-writing the SQL which lets a user search for any other user on our site and also shows their roles. As an example, roles can be "Writer", "Editor" or "Publisher". Each role links a User to a Publication, and users can take multiple roles within multiple publications. Example table setup:

      "users": user_id, firstname, lastname
      "publications": publication_id, name
      "link_writers": user_id, publication_id
      "link_editors": user_id, publication_id

    Current pseudo SQL:

      SELECT * FROM (
        (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%')
        UNION
        (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%')
      ) AS dt
      JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id

    At the moment my roles statement is:

      SELECT dt2.user_id, dt2.publication_id, dt2.role FROM (
        (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id FROM link_writers)
        UNION
        (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id FROM link_editors)
      ) AS dt2

    The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example, "publishers" might be linked across two tables:

      "link_publishers": user_id, publisher_group_id
      "link_publisher_groups": publisher_group_id, publication_id

    So in that instance, the query forming part of my UNION would be:

      SELECT 'publisher' AS role, lp.user_id, lpg.publication_id
      FROM link_publishers AS lp
      JOIN link_publisher_groups AS lpg ON lpg.publisher_group_id = lp.publisher_group_id

    I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and up to 70,000 rows in each of the link tables. Initial lookup in the users table is fast, but the joining really slows things down. How can I only join on the relevant roles?

    EDIT: The EXPLAIN output is shown above (open in a new window to see it at full resolution). The bottom bit in red is the "WHERE firstname LIKE '%Jenkz%'"; the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%', hence the large row count, but I think this is unavoidable, unless there is a way to put an index across concatenated fields? The green bit at the top just shows the total rows scanned by the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12), which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimising to use dt.user_id as a comparison inside the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers but "webmasters", "players", "teams", etc.

    Read the article
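
    A possible restructuring for the question above (a hedged sketch, not a drop-in answer): push the name filter into each role branch so MySQL joins users to each link table directly and can drive each lookup from an index on that table's user_id column (assuming one exists), instead of materialising the whole ROLES STATEMENT first. Table and column names follow the simplified schema in the question; $pdo stands for an existing PDO connection.

      <?php
      $like = '%Jenkz%';
      $sql = "
          SELECT u.user_id, 'writer' AS role, w.publication_id
            FROM users u
            JOIN link_writers w ON w.user_id = u.user_id
           WHERE u.firstname LIKE ? OR u.lastname LIKE ?
          UNION ALL
          SELECT u.user_id, 'editor' AS role, e.publication_id
            FROM users u
            JOIN link_editors e ON e.user_id = u.user_id
           WHERE u.firstname LIKE ? OR u.lastname LIKE ?
      ";
      // More complex roles (e.g. publishers) become further UNION ALL branches
      // carrying their extra join, each still filtered on the user's name.
      $stmt = $pdo->prepare($sql);
      $stmt->execute([$like, $like, $like, $like]);
      $roles = $stmt->fetchAll(PDO::FETCH_ASSOC);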

  • Pushing updates to live server... FTP isn't cutting it... a better method?

    - by Jenkz
    I'm the lead developer in a team of 2. My partner has only just joined the project, and despite using Git for version control etc., we are still stuck in the dark ages when it comes to code deployment. Currently I make all site updates via FTP using FileZilla (this way I have control over, and responsibility for, everything that goes live). I've done this for years, but we now have some large PHP classes (300KB) and a lot of traffic. So in short, every time I upload a key class ("general", for example), the site goes down until the file finishes uploading. This is only 5-6 seconds at a time, but that is increasingly unacceptable. I realise I can upload the file under a different name and then rename both files... but really, there must be a better way? I've heard about rsyncing code across from another server, but I don't see how that avoids switching to the new file while it is still uploading. We only have one server (for DB and Apache) but we also use some cloud servers (for OpenX, as an example).

    Read the article
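
    A minimal sketch of the "upload under a different name, then swap" idea from the question above, assuming the new file has already reached the server (e.g. via rsync or a git checkout). On Linux, rename() within the same filesystem replaces the target in a single step, so requests never see a half-written class file; the paths here are hypothetical.

      <?php
      $tmp  = '/var/www/site/classes/general.php.new';   // freshly uploaded copy
      $live = '/var/www/site/classes/general.php';       // file Apache is serving

      if (!rename($tmp, $live)) {                         // atomic swap on the same filesystem
          error_log('deploy: failed to swap general.php into place');
      }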

  • Which is faster? 4x10k SAS Drives in RAID 10 or 3x15k SAS Drives in RAID 5?

    - by Jenkz
    I am reviewing a quote for a server upgrade (RHEL). The server will have both Apache and MySQL on it, but the reason for the upgrade is to increase DB performance. The CPU has been upgraded massively, but I know that disk speed is also a factor. So RAID 10 gives better performance than RAID 5, but how much difference does the drive speed make? (The 15k discs in the RAID 5 config are at the top of my budget, by the way, hence not considering 4x 15k discs in RAID 10, which I assume would be the optimum.)

    Read the article
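
    Rough arithmetic for the two configurations in the question above, using rule-of-thumb per-drive figures (roughly 140 random IOPS for a 10k SAS drive, 180 for a 15k drive) and the usual RAID write penalties. These are assumptions for illustration, not vendor numbers, and a real MySQL workload mixes reads and writes.

      <?php
      $configs = [
          'RAID 10 (4x 10k)' => ['drives' => 4, 'iops' => 140, 'write_penalty' => 2],
          'RAID 5  (3x 15k)' => ['drives' => 3, 'iops' => 180, 'write_penalty' => 4],
      ];
      foreach ($configs as $name => $c) {
          $reads  = $c['drives'] * $c['iops'];                          // all spindles can serve reads
          $writes = ($c['drives'] * $c['iops']) / $c['write_penalty'];  // each logical write costs extra I/Os
          printf("%s: ~%d random read IOPS, ~%d random write IOPS\n", $name, $reads, $writes);
      }

    Under these assumptions the two arrays come out similar on random reads, but the RAID 5 set falls well behind on writes.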

  • How to grow from single server setup

    - by Jenkz
    I'm looking for resources on how to grow our server setup. We currently have one dedicated server with Rackspace in the UK, of the following spec: HPDL385_G2_PrevGen, HP single dual-core Opteron 2214 (2.2GHz), 4GB RAM, 2x 10,000rpm SCSI drives in RAID 1. Our traffic is up to 550,000 UVs per month. The site runs off a PHP and MySQL setup, and the database gets an absolute hammering; we have many complex queries joining multiple tables. We are using APC for PHP caching. I'm getting to the stage where I've done as much DB and query optimisation as I can and wonder what the next step should be. I've looked at memcache, but I've got the impression that it requires a large amount of RAM and ideally a dedicated box. So is the next step to have two boxes, one for the database and one for Apache? Or is there a step I've overlooked? Our load is usually around the 2 mark, but right now it's up at 20!

    Read the article
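
    A minimal sketch of what memcache use could look like for the setup above, assuming the PECL "memcached" extension and a memcached daemon on the same box; memcached's memory is capped when you start it (its -m option), so it does not strictly need a dedicated machine to be useful. run_heavy_report_query() is a hypothetical wrapper around one of the expensive joins.

      <?php
      $cache = new Memcached();
      $cache->addServer('127.0.0.1', 11211);        // local daemon, default port

      $key  = 'report:top_pages:v1';
      $rows = $cache->get($key);
      if ($rows === false) {                         // cache miss: run the SQL once
          $rows = run_heavy_report_query();          // hypothetical helper around the query
          $cache->set($key, $rows, 300);             // reuse the result for 5 minutes
      }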

  • PHP: Set max_file_uploads for one file rather than php.ini

    - by Jenkz
    Like many settings in PHP, changing this with ini_set() on the page doesn't actually work. I've recently upgraded my PHP version and found that my multiple-image uploader is now capped. After three hours of frustration, I found that my new PHP install has the new "max_file_uploads" directive set to 20, so only the first 7 images get uploaded (each is saved in three sizes, and 7 * 3 = 21). I can now change the php.ini value of "max_file_uploads" to 300, but I'd rather not do that site-wide. Is there any way to set that value just for a single file (upload.php)? Could a .htaccess file be used for this?

    Read the article
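
    A small diagnostic sketch for the question above: log the cap PHP is actually enforcing next to the number of file parts that survived into $_FILES (anything beyond max_file_uploads is silently dropped). Whether the directive can be overridden outside php.ini depends on the PHP version, so checking ini_get() from upload.php itself is a quick way to confirm what any override attempt achieved.

      <?php
      $cap      = (int) ini_get('max_file_uploads');   // limit in effect for this request
      $received = 0;
      foreach ($_FILES as $field) {
          // a field may hold one file or an array of files (name="images[]")
          $received += is_array($field['name']) ? count($field['name']) : 1;
      }
      error_log(sprintf('max_file_uploads=%d, file parts received=%d', $cap, $received));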

  • Basic site analytics doesn't tally with Google data

    - by Jenkz
    After being stumped by an earlier question (SO: google-analytics-domain-data-without-filtering), I've been experimenting with a very basic analytics system of my own. MySQL table: hit_id, subsite_id, timestamp, ip, url. The subsite_id lets me drill down to a folder (as explained in the previous question). I can now get the following metrics: page views, grouped by subsite_id and date; unique page views, grouped by subsite_id, date, url and IP (not necessarily how Google does it!); and the usual "most visited page", "likely time to visit" etc. I've now compared my data to that in Google Analytics and found that Google has lower values for each metric, i.e. my own setup is counting more hits than Google. So I've started discounting IPs from various web crawlers: Google, Yahoo and Dotbot so far. Short questions: Is it worth me collating a list of all major crawlers to discount, and is any such list likely to change regularly? Are there any other obvious filters that Google will be applying to GA data? What other data would you collect that might be of use further down the line? What variables does Google use to work out entrance search keywords for a site? The data is only going to be used internally for our own "subsite ranking system", but I would like to show my users some basic data (page views, most popular pages etc.) for their reference.

    Read the article
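
    A sketch of the two headline metrics from the hit table described above, with known crawler addresses discounted via a hypothetical crawler_ips table; the hit table is assumed to be called "hits", and column names follow the question (subsite_id, timestamp, ip, url). $pdo is an existing PDO connection.

      <?php
      $sql = "
          SELECT subsite_id,
                 DATE(timestamp)                      AS day,
                 COUNT(*)                             AS page_views,
                 COUNT(DISTINCT CONCAT(url, '|', ip)) AS unique_page_views
            FROM hits
           WHERE ip NOT IN (SELECT ip FROM crawler_ips)   -- assumed table of bot IPs
           GROUP BY subsite_id, DATE(timestamp)
      ";
      $stats = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);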

  • Cross domain login - what to store in the database?

    - by Jenkz
    I'm working on a system which will allow me to log in to the same system via various domains (www.example.com, www.mydomain.com, sub.domain.com, etc.). The following threads form the basis of my research so far: "Single Sign On across multiple domains" and "Cross web domain login with .net membership". What I want to happen is that if I am logged in on the master domain and I visit a page on a client domain, I am automatically logged in on the client. Obviously, if I am not logged in on the master, I will need to enter my username and password. Walkthrough:

      1. User logs in on the master site.
      2. User navigates to a client site.
      3. The client site redirects to the master site to see if the user is logged in.
      4. If the user is logged in on the master, record an RFC 4122 token ID and send this back to the client site.
      5. The client site then looks up the token ID in the central database and logs the user in.

    This might eventually end up running on more than one instance of PHP and Apache, so I can't just store: token_id, php_session_id, created. Is there any problem with me storing and using this instead: token_id, username, hashed_password, created? The record would be deleted on use, or automatically after x seconds.

    Read the article
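
    A sketch of steps 4 and 5 of the walkthrough above. The token here is an opaque 128-bit value standing in for the RFC 4122 ID mentioned (random_bytes() needs PHP 7); the sso_tokens table name is hypothetical, its columns are one possible subset of those discussed, and $pdo points at the shared central database on both sides.

      <?php
      // On the master, once the user is confirmed as logged in:
      $token = bin2hex(random_bytes(16));                        // one-time token ID
      $pdo->prepare('INSERT INTO sso_tokens (token_id, username, created) VALUES (?, ?, NOW())')
          ->execute([$token, $_SESSION['username']]);
      // ...then redirect back to the client with ?token=<value>

      // On the client, redeeming the token (delete on use, expire after 60 seconds):
      $stmt = $pdo->prepare('SELECT username FROM sso_tokens
                              WHERE token_id = ? AND created > NOW() - INTERVAL 60 SECOND');
      $stmt->execute([$_GET['token']]);
      $username = $stmt->fetchColumn();
      $pdo->prepare('DELETE FROM sso_tokens WHERE token_id = ?')->execute([$_GET['token']]);
      if ($username !== false) {
          // establish the client-domain session for $username here
      }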
