Search Results

Search found 49554 results on 1983 pages for 'database users'.


  • consequences of changing uid/gid on snow leopard

    - by Peter Carrero
    OK, so I introduced a Mac laptop to my home network of Kubuntu hosts and Fedora servers. Currently I don't have NIS or LDAP set up (I have only 2 users) and I just manually set up the UID/GID on the hosts. I would like to run the following commands on my MacBook:

        dscl . -change /Users/me UniqueID 501 1000
        dscl . -change /Users/me PrimaryGroupID 20 503
        chown -R 1000:503 /Users/me
        dscl . append /Groups/staff GroupMembership me

    Before I go on to hose my new Mac, I would like to know whether this is the right thing to do and, if so, what adverse consequences I may run into. Thanks.

    Read the article

  • nginx regex configuration for 404 images

    - by Muhammet Arslan
    I have a series of dynamic links like the one below:

        http://example.com/users/1871233/18712443_cover.jpg

    The only static parts of that link are example.com/users and _cover. I want to return a placeholder JPG when the requested image is not found:

        location ~ ^\/users\/(.*)\/(.*)_cover.*(jpg|jpeg|png|gif)$ {
            error_page 404 /deff_images/empty-cover-jpg.jpg;
        }

    I tried something like the above, but it did not work. What can I do here? Thanks.

    Read the article

  • enabling a user (created with adduser command) for lightdm graphical login

    - by Basile Starynkevitch
    I just installed Ubuntu 12.04 AMD64 on a new (empty) hard disk (because the previous one crashed). Since I am quite familiar with Debian, I created two accounts with the adduser command. Since I also have an NFSv3 file system, I explicitly gave user IDs when creating them (for simplicity, I keep the same user IDs as on the home server, running Debian; the user names contain digits; I'm not using LDAP), e.g.:

        # grep bethy /etc/passwd
        bethy46:x:501:501:Bethy XXX,,,06123456:/home/bethy:/bin/bash
        # grep bethy /etc/group
        bethy64:x:501:
        # grep bethy /etc/shadow
        bethy46:$6$vQ-wmuchmorethings-2o/:15479:0:99999:7::

    Of course /home/bethy exists. The actual user name is slightly different, and I am not showing the real entries (for obvious privacy reasons). However, these users don't appear at the graphical login prompt (lightdm), even though they exist in the system: they have entries in /etc/passwd and /etc/shadow, and I (partly) restored their /home. I've got no specific user config under /etc/lightdm; the file /etc/lightdm/users.conf mentions:

        # NOTE: If you have AccountsService installed on your system, then LightDM
        # will use this instead and these settings will be ignored

    but I have no idea how to deal with AccountsService through the command line. As you probably guessed, I really dislike doing administrative tasks through a graphical interface; I much prefer the command line. What did I do wrong? How can a user entry not appear in the lightdm graphical login? (I need my wife's user entry to be available for graphical login.) I am not asking how to hide a user, but how to show it at the lightdm graphical prompt.

    Work-around: As I have been told in comments by Nirmik and by Enzotib, lightdm probably doesn't show any users with a uid less than 1024. So I changed all the uids to be greater than 8200 (including on the Debian NFS server), and this made all the users visible at the graphical prompt. It is a pain that such a threshold is not really documented.

    Read the article

  • Which would be a better way to load data via ajax

    - by Mike
    I am using Google Maps and returning HTML/lat/long from my MySQL database. Currently:

        1. A user picks a business category, e.g. "Video Production".
        2. An Ajax call is sent to a CodeIgniter controller.
        3. The controller queries the db and returns the following data via JSON: lat/long of the marker and HTML for the popup window. This is approximately 34 rows in the database, across two tables, per business.
        4. The Ajax call receives this data and plots the marker along with the HTML onto the map.

    The data returned from the controller is one big JSON object, and this is done for all businesses that exist in the Video Production category (currently approx. 40 businesses). As you can see, pulling this data for multiple categories (100s of businesses) can get very taxing on the server. My question is: would it be more beneficial to modify the process flow as follows?

        1. A user picks a business category, e.g. "Video Production".
        2. An Ajax call is sent to a CodeIgniter controller.
        3. The controller queries the database for the location-based information only: lat/long and level (used to change the marker icon colour). This would be a single row per business with several columns.
        4. The Ajax call receives this data and plots the marker on the map.
        5. When the user clicks a marker, an Ajax call is sent to a CodeIgniter controller.
        6. The controller queries the database for the HTML and additional data based on business_id.

    And if not, what are some better suggestions for this problem? In summary, this means that rather than including the HTML and additional data for each business up front, I would submit only minimal location information and then re-query for the rest when each business marker is clicked (see the SQL sketch below). Potential downsides: longer load times when a user clicks a marker icon, more code, and more queries to the database.
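
    A minimal SQL sketch of the split described above. The table and column names (businesses, business_detail, lat, lng, level, popup_html) are assumptions for illustration, not the poster's actual schema:

        -- Initial call: one lightweight row per business, just enough to plot the markers
        SELECT b.business_id, b.lat, b.lng, b.level
        FROM businesses AS b
        WHERE b.category = 'Video Production';

        -- On marker click: fetch the popup HTML and extra data for a single business
        SELECT b.business_id, d.popup_html
        FROM businesses AS b
        JOIN business_detail AS d ON d.business_id = b.business_id
        WHERE b.business_id = 1234;

    The first query stays cheap no matter how many markers are on the map; the second runs only on demand, once per click.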

    Read the article

  • Simple jail for user with open-ssh

    - by Vikram
    Can I confine my users to their /home/%u directory using only OpenSSH configuration? I did the following, based on what I found on the Internet:

        1. Stopped the server.
        2. Appended the following to the sshd_config file:

            Match group sftpusers
                ChrootDirectory /home/%u
                X11Forwarding no
                AllowTcpForwarding no

        3. Started the server.

    FYI, I have the users added to the sftpusers group, but my users can still access the entire file structure on my system. This is Ubuntu Server 12.04 LTS with OpenSSH installed.

    Read the article

  • How to add wildcards to Linux Malware Detect ignore_paths

    - by Laurence Cope
    I am using Linux Malware Detect to scan and report on malware, but on a daily basis I receive alerts for malware in users' emails (mainly the spam folder). I do not want alerts for this; the spam folders are cleaned often, and the users may clean them too. I tried adding wildcards to /usr/local/maldetect/ignore_paths as follows, but they are not ignored:

        /home/*/homes/*/Maildir
        /home/?/homes/?/Maildir

    Does anyone know how to exclude folders using wildcards? It would not be practical to add the full path of every user's mail directory. Thanks

    Read the article

  • PHP MYSQL loop to check if LicenseID Values are contained in mysql DB [closed]

    - by Jasper
    I am having some trouble finding the right loop to check whether some values are contained in a MySQL DB. I'm writing a piece of software and I want to add license IDs. Each user has X keys to use. When the user starts the client, it invokes a PHP page that checks whether the key sent in the POST request is stored in the DB or not. If that key isn't stored, then I need to check the number of his keys: if it's already X I'll ban him, otherwise I add the new key to the DB. I'm new to PHP and MySQL. I wrote this code and I would like to know if I can improve it.

        <?php
        $user = POST METHOD
        $licenseID = POST METHOD

        $resultLic = mysql_query("SELECT id, idUser, idLicense FROM license WHERE idUser = '$user'") or die(mysql_error());
        $resultNumber = mysql_num_rows($resultLic);
        $keyFound = '0'; // If keyFound is 1 the key is stored in DB

        while ($rows = mysql_fetch_array($resultLic, MYSQL_BOTH)) {
            // this loop checks if the $licenseID is stored in DB or not
            for ($i = 0; $i < $resultNumber; $i++) {
                if ($rows['idLicense'] === $licenseID) {
                    // Just for the debug
                    echo("License Found");
                    $keyFound = '1';
                    break;
                }
                // If key isn't in DB and there are less than 3 keys the new key will be stored in DB
                if ($keyFound == '0' && $resultNumber < 3) {
                    mysql_query( Update users set ...Store $licenseID in Table );
                }
                // Else means the user wants to use another generated key (from the client) and he will be banned
                // (it's written in the TOS terms that they can't use the software on more than 3 different stations)
                else {
                    mysql_query( update users set ban ='1'.....etc );
                }
            }
        }
        ?>

    I know this code seems really bad, so I would like to know how I can improve it. Could someone give me any advice? I chose to have 2 tables: users, where all the information about the users is (with fields id, username, password), and another table license with fields id, idUsername, idLicense (the last one stores the licenses that the software generates).
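
    For comparison, the same check can be pushed into a single SQL query instead of a PHP loop. This is only a sketch against the poster's license table; the user id and key value below are placeholders:

        -- One round trip: how many keys does this user have, and is the submitted key among them?
        SELECT COUNT(*)                        AS total_keys,
               SUM(idLicense = 'ABC-123-DEF')  AS key_already_stored
        FROM license
        WHERE idUser = 42;

    If key_already_stored is 0 and total_keys is below the limit, insert the key; if it is 0 and total_keys has already reached the limit, flag the ban. In MySQL, SUM over a comparison works because the comparison evaluates to 0 or 1 per row.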

    Read the article

  • Adding Column to a SQL Server Table

    - by Dinesh Asanka
    Adding a column to a table is a common task for DBAs. You can add a column to a table as a nullable column or as a column with default values. But are these two operations similar internally, and which method is optimal? Let us start with an example. I created a database and a table using the following script:

        USE master
        GO
        --Drop Database if exists
        IF EXISTS (SELECT 1 FROM SYS.databases WHERE name = 'AddColumn')
            DROP DATABASE AddColumn
        --Create the database
        CREATE DATABASE AddColumn
        GO
        USE AddColumn
        GO
        --Drop the table if exists
        IF EXISTS (SELECT 1 FROM sys.tables WHERE Name = 'ExistingTable')
            DROP TABLE ExistingTable
        GO
        --Create the table
        CREATE TABLE ExistingTable
        (ID        BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
         DateTime1 DATETIME DEFAULT GETDATE(),
         DateTime2 DATETIME DEFAULT GETDATE(),
         DateTime3 DATETIME DEFAULT GETDATE(),
         DateTime4 DATETIME DEFAULT GETDATE(),
         Gendar    CHAR(1) DEFAULT 'M',
         STATUS1   CHAR(1) DEFAULT 'Y')
        GO
        -- Insert 100,000 records with default values
        INSERT INTO ExistingTable DEFAULT VALUES
        GO 100000

    Before adding a column

    Before adding a column, let us look at some of the details of the database.

        DBCC IND (AddColumn, ExistingTable, 1)

    By running the above command, you will see 637 pages for the created table.

    Adding a column

    You can add a column to the table with the following statement, which adds a column with a NULL value for the existing records:

        ALTER TABLE ExistingTable ADD NewColumn INT NULL

    Alternatively, you could add a column with default values. The following statement adds a column with a value of 1 for the existing records:

        ALTER TABLE ExistingTable ADD NewColumn INT NOT NULL DEFAULT 1

    In the table below I measured the performance difference between the two statements.

        Parameter    Nullable Column    Default Value
        CPU          31                 702
        Duration     129 ms             6653 ms
        Reads        38                 116,397
        Writes       6                  1329
        Row Count    0                  100000

    If you look at the Row Count parameter, you can clearly see the difference. Though a column is added in the first case, none of the rows are affected, while in the second case all the rows are updated. That is the reason why it has taken more duration and CPU to add the column with a default value. We can verify this by several methods.

    Number of pages

    The number of data pages can be obtained by using the DBCC IND command. Though this is an undocumented DBCC command, many experts are OK with using it in production. However, since there is no official word from Microsoft, use it "at your own risk".

        DBCC IND (AddColumn, ExistingTable, 1)

        Before adding the columns             637
        Adding a column with NULL             637
        Adding a column with DEFAULT value    1270

    This clearly shows that the pages are physically modified. Please note that the high value indicated for "Adding a column with DEFAULT value" is also a result of page splits. Continues…
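
    For reference, the page count that DBCC IND reports can also be obtained with a documented DMV; this is a sketch against the article's AddColumn database, not part of the original post:

        USE AddColumn
        GO
        -- Documented alternative to DBCC IND for counting pages in the clustered index
        SELECT index_id, index_level, page_count
        FROM sys.dm_db_index_physical_stats(
                 DB_ID('AddColumn'), OBJECT_ID('ExistingTable'), NULL, NULL, 'DETAILED');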

    Read the article

  • System Requirements of a write-heavy application serving hundreds of requests per second

    - by Rolando Cruz
    NOTE: I am a self-taught PHP developer who has little to no experience managing web and database servers.

    I am about to write a web-based attendance system for a very large user base. I expect around 1000 to 1500 users logged in at the same time, making at least 1 request every 10 seconds or so, for a span of 30 minutes a day, 3 times a week. So it's more or less 100 requests per second, or at the very worst 1000 requests in a second (an average of 16 concurrent requests? But it could be higher given the short timeframe in which users will make these requests; crosses fingers to avoid 100 concurrent requests).

    I expect two types of transactions, a local (not referring to a local network) and a foreign transaction:

        • Local transactions basically download user data in their locality and cache it for 1 - 2 weeks. Attendance requests will probably be two numeric strings only: userid and eventid.
        • Foreign transactions are for attendance of those who do not belong to the current locality. These will pass the following data instead: (numeric) locality_id, (string) full_name.

    Both requests are done in Ajax, so no HTML data is included, only JSON, and both types of requests expect at the very least a single numeric response from the server. I think there will be a 50-50 split in the frequency of local and foreign transactions, but there are only a few bytes of difference anyway in the sizes of these transactions. At the moment the userid may only reach 6 digits and eventids are 4- to 5-digit integers too.

    I expect my users table to have at least 400k rows (growing by around 500 to 1,000 rows a week), the event table to have as many as 10k rows, a locality table with at least 1500 rows, and my main attendance table to increase by 400k rows (based on the number of users in the users table) a day for 3 days a week (1.2M rows a week). For me, this sounds big. But is it really that big? Or can it be handled by a single server (not sure about the server specs yet, since I'll probably avail of a VPS from ServInt or others)? I tried to read up on multiple-server setups: Heartbeat, DRBD, master-slave setups. But I wonder if they're really necessary. If this can't be handled by a single server, and I am to choose a MySQL replication topology, what would be the best setup for this case?

    Sorry if I sound vague or the question is too wide. I just don't know what to ask or what you want to know at this point.
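
    Purely as a point of reference for the row volumes being discussed, here is a hypothetical MySQL/InnoDB sketch of the main attendance table built from the figures in the question; every name, type and index choice here is an assumption:

        CREATE TABLE attendance (
            id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            eventid     INT UNSIGNED NOT NULL,   -- 4- to 5-digit event ids
            userid      INT UNSIGNED NULL,       -- up to 6 digits; NULL for foreign check-ins
            locality_id INT UNSIGNED NULL,       -- foreign check-ins only
            full_name   VARCHAR(100) NULL,       -- foreign check-ins only
            created_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            KEY ix_event_user (eventid, userid)
        ) ENGINE=InnoDB;

    At roughly 400k inserts in a 30-minute window, keeping the row narrow and the secondary indexes to a minimum is what keeps the per-insert cost down.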

    Read the article

  • 65536% Autogrowth!

    - by Tara Kizer
    Twice a year, we move our production systems to our disaster recovery site.  Last Saturday night was one of those days.  There are about 50 SQL Server databases to be moved to the DR site, which is done via database mirroring.  It takes only a few seconds to fail over, but some databases have a bit more involved work such as setting up replication.  Everything went relatively smoothly, but we encountered a weird bug on our most mission critical system.  After everything was successfully failed over to the DR site, it was noticed that mirroring was in a suspended state on one of the databases.  We thought we had run into a SQL Server 2005 bug that we had been encountering and were working with Microsoft on a fix.  Microsoft did fix it in both SQL Server 2005 service pack 3 cumulative update package 13 and service pack 4 cumulative update package 2; however, SP3 CU13 and SP4 both recently failed on this system, so we were not yet patched with the bug fix.  As the suspended state was causing us issues with replication, we dropped mirroring.  We then noticed we had 10MB of free disk space on the mount point where the principal’s data files are stored.  I knew something went amiss as this system should have at least 150GB free on that mount point.  I immediately checked the main database’s data file and was shocked to see an autogrowth size of 65536%.  The data file autogrew right before mirroring went into the suspended state. 65536%!

    I didn’t have a lot of time to research whether this autogrowth problem was a known SQL Server bug, so I deferred that research to today.  A quick Google search yielded no results, but emphasis on “quick”.  I checked our performance system, which was recently restored with a copy of the affected production database, and found the autogrowth setting to be 512MB.  So this autogrowth bug was encountered sometime in the last two weeks.  On February 26th, we had attempted to install SQL 2005 SP4 on production; however, it had failed (PSS case open with Microsoft).  I suspected that the SP4 failure was somehow related to this autogrowth bug, although that turned out not to be the case.

    I then tweeted (@TaraKizer) about this problem to see if the SQL Server community (#sqlhelp) had any insights.  It seems several people have either heard of this bug or encountered it.  Aaron Bertrand (blog|twitter) referred me to this Connect item. Our affected database originated on SQL Server 2000 and was upgraded to SQL Server 2005 in 2007.  Back on SQL Server 2000, we were using the default file growth setting, which was a percentage.  Sometime after the 2005 upgrade is when we changed it to 512MB.  Our situation seemed to fit the bug Aaron referred me to, so now the question was whether Microsoft had fixed it yet.

    I received a reply to my tweet from Amit Banerjee (twitter) that it had been fixed in SP3 CU1 (KB958004).  My affected system is SP3 CU8, so I was initially confused why we had encountered the bug.  Because I don’t read things fully, I had missed that there are additional steps you have to follow after applying the bug fix.  Amit set me straight.  Although you can read this information in the KB article, I will also copy it here in case you are as lazy as me and miss the most important section of it (although if you are as lazy as me, you won’t have read this far down my blog post): This hotfix will prevent only future occurrences of this problem.
For example, if you restore a database from SQL Server 2000 to a SQL Server 2005 instance that contains this hotfix, this problem will not occur. However, if you already have a database that is affected by this problem, you must follow these steps to resolve this problem manually:

    1. Apply this hotfix.
    2. Set the file growth settings for the affected files to percentage settings, and then set the settings back to megabyte settings.
    3. Take the database offline, and then bring it back online.
    4. Verify that the values of the is_percent_growth column are correct in the sys.database_files system table and in the sys.master_files system table.
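
    A sketch of what those manual steps might look like in T-SQL, assuming a database named MyDB with a data file whose logical name is MyDB_Data and a desired 512MB increment; both names and the increment are placeholders:

        -- Step 2: toggle the growth setting to a percentage, then back to megabytes
        ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 10%);
        ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 512MB);

        -- Step 3: cycle the database offline and back online
        ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
        ALTER DATABASE MyDB SET ONLINE;

        -- Step 4: confirm the growth setting is no longer percent-based
        SELECT name, is_percent_growth, growth FROM MyDB.sys.database_files;
        SELECT name, is_percent_growth, growth FROM sys.master_files WHERE database_id = DB_ID('MyDB');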

    Read the article

  • What email server should I choose?

    - by DCC
    I need a secure email server installed on Debian Lenny with users in a MySQL table. The users are from multiple domains. Quota should be in MySQL or a global variable for all users. What are my options? Thanks in advance for your help.
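
    As a rough illustration of what "users in a MySQL table" typically looks like for a multi-domain virtual-mail setup (a Postfix/Dovecot-style layout is assumed here, since the question does not name a mail stack), with the quota kept in MySQL as requested:

        CREATE TABLE mail_domains (
            id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(255) NOT NULL UNIQUE
        ) ENGINE=InnoDB;

        CREATE TABLE mail_users (
            id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            domain_id   INT UNSIGNED NOT NULL,
            email       VARCHAR(255) NOT NULL UNIQUE,
            password    VARCHAR(255) NOT NULL,                       -- store a hash, never plaintext
            quota_bytes BIGINT UNSIGNED NOT NULL DEFAULT 1073741824, -- 1 GB default quota
            FOREIGN KEY (domain_id) REFERENCES mail_domains (id)
        ) ENGINE=InnoDB;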

    Read the article

  • Wamp server that works over a network

    - by user28233
    I have 2 computers, both with Windows 7 as the OS. I have installed WampServer on one of them, and I have a MySQL database on that WampServer. I have then made a VB.NET program that connects to the MySQL database, and I have put the program on both computers. What I want is for those two programs to see the same database that is on the one computer, and for both of them to be able to add, delete and update that one database. How do I do that? How do I network the MySQL database? Do I also have to install WampServer on the other computer? What do I do? Please enlighten me.
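
    A rough sketch of the MySQL side of this, with the assumption that the WAMP machine's MySQL is reachable from the LAN (bind-address not limited to 127.0.0.1 and port 3306 open in the Windows firewall). The user name, password, database name and subnet below are placeholders:

        -- Run on the WAMP machine: let the app's account connect from other hosts on the LAN
        CREATE USER 'appuser'@'192.168.1.%' IDENTIFIED BY 'ChangeMe123';
        GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'appuser'@'192.168.1.%';
        FLUSH PRIVILEGES;

    The second computer does not need WampServer at all; the VB.NET program only needs its MySQL connector pointed at the first machine's IP address instead of localhost.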

    Read the article

  • Download Instagram photos of any user on Windows, Mac & Linux using 4K Stogram

    - by Gopinath
    Instagram is one of the most popular mobile applications used to take pictures and share them online. Initially released for iOS devices, Instagram quickly won the hearts of photographers and made it into the top 10 app charts in the Apple App Store. With a simple interface and quick photo-processing features, Instagram's popularity has grown by leaps and bounds. A few months ago, after the release of an Android application, Facebook acquired Instagram for around 1 billion dollars. Undoubtedly Instagram is the most popular photo sharing app on mobile devices, and it's where you find the best photographers sharing their beautiful pictures online. Being the best source of photographs, it attracts many users who love to browse and download them. For those looking to download photographs from Instagram, here is an excellent free desktop application: 4K Stogram.

    Once you have installed 4K Stogram, you will be able to quickly browse and download the photographs of any user by just entering a user name in the search box and clicking the Follow button. The app quickly searches for the recent photographs and displays the thumbnails as it downloads them. You can double click on any photograph to view it with your operating system's default photo viewer. Open the location of the photograph and you will find the rest of the downloaded photos in the same folder. Once you enter the name of a user, 4K Stogram automatically keeps tracking that user until you manually delete it, and it automatically downloads the latest photographs when it's launched again. In this way you can track multiple users and download their photographs.

    Here is a quick rundown of 4K Stogram features:

        • View and download photos of Instagram users
        • No need to have an Instagram account; just enter a user name and download all photos
        • Track multiple users and download their photos
        • Automatically download latest photos
        • All new photos are marked with an indicator for quick reference
        • Free and open source desktop application that runs on Windows, Mac & Linux

    Link to 4K Stogram Application website

    Read the article

  • Error when setting Piwik analytics

    - by bertran
    I've uploaded the latest version of Piwik onto my web server, which is hosted by GoDaddy.com on a Linux hosting plan. I'm setting it up (accessing it from my browser as instructed) and I have the Piwik installation page open at step 3 (database set-up) of 9. I don't know what to input in the "database server" field... the default is 127.0.0.1. When I leave that input as is and click "Next", it gives the error:

        Error when trying to connect to database server:
        SQLSTATE[HY000] [2013] Lost connection to MySQL server at 'reading initial communication packet', system error: 111

    Changing that input to "localhost" gives me another error:

        Error when trying to connect to database server:
        SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

    Read the article

  • Oracle University New Courses (Week 14)

    - by swalker
    Oracle University has recently published the following new courses (or new versions):

        Database
            Oracle Data Modeling and Relational Database Design (4 days)
        Fusion Middleware
            Oracle Directory Services 11g: Administration (5 days)
            Oracle Unified Directory 11g: Services Deployment Essentials (2 days)
            Oracle GoldenGate 11g Management Pack: Overview (1 day)
        Business Intelligence & Datawarehousing
            Oracle Database 11g: Data Mining Techniques (2 days)
        Oracle Solaris
            Oracle Solaris 10 System Administration for HP-UX Administrators (5 days)
        E-Business Suite
            R12.x Oracle Time and Labor Fundamentals

    Please contact your local Oracle University team for course dates and other details. Stay connected to Oracle University: LinkedIn | OracleMix | Twitter | Facebook | Google+

    Read the article

  • IBM DB2 9.7, DBADM and my Rubik's Cube

    It's a challenge to adapt to change, but the changes in IBM DB2 9.7's Database Administrator authority bring significant database security benefits. Join Rebecca Bond as she shares some twists, some turns, and some clues regarding DB2 9.7's Database Administrator (DBADM) authority.

    Read the article

  • Oracle University New Courses (Week 14)

    - by swalker
    Among this month's new offerings from Oracle University, you will find:

        Database
            Oracle Data Modeling and Relational Database Design (4 days)
        Fusion Middleware
            Oracle Directory Services 11g: Administration (5 days)
            Oracle Unified Directory 11g: Services Deployment Essentials (2 days)
            Oracle GoldenGate 11g Management Pack: Overview (1 day)
        Business Intelligence & Datawarehousing
            Oracle Database 11g: Data Mining Techniques (2 days)
        Oracle Solaris
            Oracle Solaris 10 System Administration for HP-UX Administrators (5 days)
        E-Business Suite
            R12.x Oracle Time and Labor Fundamentals

    Contact your local Oracle University team for information and course dates. Stay connected to Oracle University: LinkedIn | OracleMix | Twitter | Facebook | Google+

    Read the article

  • Exadata at Oracle Openworld - A guide to sessions

    - by Javier Puerta
    A large number of sessions focusing on Exadata will be taking place during the week of Oracle Openworld in San Francisco. To help you organize your schedule, I am including below a list of sessions and events around Exadata that you will find of interest.

    PARTNER-SPECIFIC SESSIONS

        Sunday, Sep 30, 3:30 PM - 4:30 PM, Moscone South - 301
        Building a Winning Services Practice with Oracle's Engineered Systems. This session kicks off a week-long look at Oracle's engineered systems, from Oracle Database Appliance to Oracle Exadata, Oracle Exalogic, Oracle Exalytics, Oracle Big Data Appliance, and Oracle SPARC SuperCluster. Hear about what is to come in the week ahead in terms of engineered systems. As an ideal consolidation platform for database workloads, Oracle Exadata generates significant services opportunities. This session reviews the range of partner-led services that support Oracle Exadata deployments.

        Monday, October 1st, 15:30 - 18:00 PST, Grand Hyatt San Francisco, 345 Stockton Street, San Francisco (Conference Theater); a 15-minute walk from the OOW Moscone Center (see directions here)
        Exadata & Manageability EMEA Partner Community Forum. Listen to other partners share their experiences in selling and implementing Exadata and Manageability projects, and have a direct dialogue with some of the Oracle executives that are driving the strategy of the company in these areas. Agenda:
            Welcome - Hans-Peter Kipfer, VP, Engineered Systems Oracle EMEA
            Next challenges in building and managing clouds - Javier Cabrerizo, VP, Business Development for Exadata, Oracle Corp.
            Partner Experiences:
                IT modernization, simplification and cost reduction: the case of a customer in Transportation & Logistics with custom applications and SAP - Francisco Bermudez, Country Leader Infrastructure Services, Capgemini, Spain
                Nvision cloud project - Dmitry Krasilov, Head of Oracle Competence Center, Nvision Group, Russia
                From Exadata Ready to Exadata Optimized: An ISV Experience - Miguel Alves, Product Business Solutions Manager, WeDo Technologies, Portugal
        To confirm your participation, send an email to [email protected]

        Wednesday, Oct 3, 11:45 AM - 12:45 PM, Marriott Marquis - Golden Gate B
        Building a Practice with Exadata Database Machine. As an ideal consolidation platform for database workloads, Oracle's Exadata Database Machine generates significant services opportunities. In this session, learn about the range of partner-led services that support Exadata Database Machine deployments.

        Other Engineered Systems sessions for Partners at the Oracle PartnerNetwork Exchange: Click here.

    OOW CUSTOMER SESSIONS

        Download the Focus On Exadata guide for a full list of Exadata OOW sessions.

    Read the article

  • Oracle University New Courses (Week 10)

    - by swalker
    Oracle University has recently published the following new courses (or new versions):

        Database
            RAC & Grid Infrastructure for Oracle Solaris System Administration (1 day)
            Oracle Database 11g: Performance Tuning (Training On Demand)
        Development Tools
            Oracle Database: Program with PL/SQL (Training On Demand)
        MySQL
            MySQL for Database Administrators (Training On Demand)
        Fusion Middleware
            Oracle WebCenter Portal 11g: Build Portals With Spaces (3 days)
            Oracle WebCenter Content 11g: Site Studio Essentials (5 days)
            Oracle BPM 11g Modeling (3 days)
        Business Intelligence & Datawarehousing
            Oracle BI Applications 7.9.6: Implementation for Oracle EBS (4 days)
            Oracle BI Applications 7.9.6: Implementation for Siebel CRM (4 days)
            Oracle BI 11g R1: Build Repositories (Training On Demand)
        Fusion Applications
            Fusion Applications: Extend Applications with ADF (5 days)
        E-Business Suite
            R12.x Extend Oracle Applications: Building OA Framework Applications (Training On Demand)
        PeopleSoft
            PeopleSoft Integration Tools Rel 8.50 (Training On Demand)

    Please contact your local Oracle University team for course dates and other details.

    Read the article

  • SQL Down Under Podcast 50 - Guest Louis Davidson now online

    - by Greg Low
    Hi Folks, I've recorded an interview today with SQL Server MVP Louis Davidson. In it, Louis discusses some of his thoughts on database design and his latest book.

    You'll find the podcast here: http://www.sqldownunder.com/Resources/Podcast.aspx

    And you'll find his latest book (Pro SQL Server 2012 Relational Database Design and Implementation) here: http://www.amazon.com/Server-Relational-Database-Implementation-Professional/dp/1430236957/ref=sr_1_2?ie=UTF8&qid=1344997477&sr=8-2&keywords=louis+davidson

    Enjoy!

    Read the article

  • Multitenant Design for SQL Azure: White Paper Available

    - by Herve Roggero
    Cloud computing is about scaling out all your application tiers, from the web application to the database layer. In fact, the whole promise of Azure is to pay for just what you need. You need more IIS servers? No problemo... just spin up another web server. You expect to double your storage needs for Azure Tables? No problemo; you are covered there too... just pay for your storage needs. But what about the database tier, SQL Azure? How do you add new databases easily, and transparently, so that your application simply uses more of SQL Azure if it needs to? Without changing a single line of code? And what if you need to scale back down? Welcome to the world of database scalability.

    There are many terms that describe database scalability, including data federation, multitenant designs, and even NoSQL, depending on the technical solution you are implementing. Because SQL Azure is a transactional database system, NoSQL is not really an option. However, data federation and multitenant designs offer some very interesting scalability options that are worth considering. Data federation, a feature of SQL Azure that will be offered in the future, offers very interesting capabilities available natively on the SQL Azure platform. More to come in a few weeks...

    Multitenant designs, on the other hand, are design practices and technologies designed to help you reach flexible scalability options not available otherwise. The first incarnation of such a method was made available on CodePlex as an open source project (http://enzosqlshard.codeplex.com). This project was an attempt to provide a sharding library for educational purposes. All that sounds really cool... and really esoteric... almost a form of database "voodoo"... However, after being on multiple Azure projects, I am starting to see a real need. Customers want to be able to free themselves from the database tier, so that if they have 10 new customers tomorrow, all they need to do is add 2 more SQL Azure instances. It's that simple. How you achieve this, and suggested application design guidelines, are available in a white paper I just published.

    The white paper offers two primary sections. The first section describes the business and technical problem at hand, and how to classify it according to specific design patterns. For example, I discuss compressed shards through schema separation. The second section offers a method for addressing the needs of a multitenant design using a new library, the big brother of the CodePlex project mentioned previously (which I created earlier this year), complete with a management interface and such. A Beta of this platform will be made available within weeks, as soon as the documentation is ready.

    I would like to ask you to drop me a quick email at [email protected] if you are going to download the white paper. It's not required, but it would help me get in touch with you for feedback. You can download the white paper here: http://www.bluesyntax.net/files/EnzoFramework.pdf . Thank you, and I am looking for feedback, thoughts and implementation opportunities.
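
    The "compressed shards through schema separation" pattern mentioned above is easiest to picture with a small sketch. The tenant and table names below are invented for illustration and are not taken from the white paper or the Enzo library:

        -- One SQL Azure database, one schema per tenant (a "compressed shard")
        CREATE SCHEMA Tenant001 AUTHORIZATION dbo;
        GO
        CREATE SCHEMA Tenant002 AUTHORIZATION dbo;
        GO

        -- The same table shape is created in each tenant schema
        CREATE TABLE Tenant001.Orders (
            OrderId INT IDENTITY(1,1) PRIMARY KEY,
            Placed  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
            Amount  DECIMAL(10,2) NOT NULL
        );
        CREATE TABLE Tenant002.Orders (
            OrderId INT IDENTITY(1,1) PRIMARY KEY,
            Placed  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
            Amount  DECIMAL(10,2) NOT NULL
        );

    When one tenant outgrows the shared database, its schema and data can be moved to an additional SQL Azure instance without touching the other tenants, which is the kind of elasticity the post describes.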

    Read the article

  • Cannot delete files on samba share when authenticated using kerberos

    - by ondra
    I have a Samba server that authenticates users using LDAP; however, it does have Kerberos enabled as well. Unfortunately, users authenticated using Kerberos cannot delete files. I can test this using smbclient: if I use the '-k' switch, I cannot delete the files; if I don't, I can. The users do have read/write/execute access to the directory from which they are trying to delete the file. Any idea what might be wrong?

    Read the article

  • Demo on Data Guard Protection From Lost-Write Corruption

    - by Rene Kundersma
    Today I received the news that a new demo has been made available on OTN for Data Guard protection from lost-write corruption. Since this is a typical MAA solution and a very nice demo, I decided to mention this great feature in this blog as well, even though it has been a recommended best practice for some time. When lost writes occur, an I/O subsystem acknowledges the completion of the block write even though the write I/O did not occur in the persistent storage. On a subsequent block read on the primary database, the I/O subsystem returns the stale version of the data block, which might be used to update other blocks of the database, thereby corrupting it. Lost writes can occur after an OS or storage device driver failure, faulty host bus adapters, disk controller failures and volume manager errors.

    In the demo, a data block lost write occurs when an I/O subsystem acknowledges the completion of the block write, while in fact the write did not occur in the persistent storage. When a primary database lost-write corruption is detected by a Data Guard physical standby database, Redo Apply (MRP) will stop and the standby will signal an ORA-752 error to explicitly indicate that a primary lost write has occurred (preventing the corruption from spreading to the standby database).

    Links:
        MOS note 1302539.1: "Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration"
        Demo
        MAA Best Practices

    Rene Kundersma
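
    Not shown in the excerpt above, but useful context: this kind of lost-write detection relies on the DB_LOST_WRITE_PROTECT initialization parameter being set on both the primary and the standby. The value and scope below are a typical choice offered as a sketch, not a quote from the demo:

        -- On the primary and on the physical standby (assumes an spfile is in use)
        ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL SCOPE = BOTH;

        -- Verify the current setting
        SELECT value FROM v$parameter WHERE name = 'db_lost_write_protect';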

    Read the article

  • Rule Engine in .net

    - by user641812
    I have to import data from Excel into a SQL database. The Excel data contains various parameters and their values, such as P1, P4, P5, etc. I have to apply business rules such as: if (P1 > 100 and P1 < 200) then insert the record in the database. Similarly, in some cases string values are also validated. Is there any open source rule engine that includes a UI to change, add and delete the rules? I am using C# to read the Excel file and insert the records. One more thing: which is the better approach?

        1. Read the Excel file first and store every record as an object in a collection, then iterate through the collection, apply the business rules to every object and then insert the record in the database; or
        2. Read one record from Excel, apply the business rules and then insert the record in the database, repeating the process for the whole Excel file.

    Read the article
