Search Results

Search found 33257 results on 1331 pages for 'django database'.


  • insert multiple rows into database from arrays

    - by Mark
    Hi, I need some help inserting multiple rows from different arrays into my database. I am building the database for a seating plan: each seating block has 5 rows (A-E), and each row has 15 seats. My table columns are seat_id, seat_block, seat_row and seat_number, so I need to add 15 seat_numbers for each seat_row and 5 seat_rows for each seat_block. I mocked it up with some foreach loops but need help turning it into a (hopefully single) SQL statement.

      $blocks = array("A","B","C","D");
      $seat_rows = array("A","B","C","D","E");
      $seat_nums = array("1","2","3","4","5","6","7","8","9","10","11","12","13","14","15");

      foreach ($blocks as $block) {
          echo "<br><br>";
          echo "Block: " . $block . " - ";
          foreach ($seat_rows as $rows) {
              echo "Row: " . $rows . ", ";
              foreach ($seat_nums as $seats) {
                  echo "seat:" . $seats . " ";
              }
          }
      }

    Maybe there is a better way of doing it than arrays? I just want to avoid writing an SQL statement that is over 100 lines long ;) (I am using CodeIgniter too, if anyone knows of a CI-specific way of doing it, but I am not too bothered about that.)
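
    Editor's note: a minimal sketch (not CodeIgniter-specific; CodeIgniter's insert_batch, if your version has it, does much the same) of collapsing the nested loops into a batch insert. The column names come from the question; the table name and the SQLite connection are stand-ins.

      # Illustrative sketch: generate all (block, row, seat) combinations and
      # insert them in one batch instead of one hand-written statement per row.
      import itertools
      import sqlite3  # stand-in for the real MySQL connection

      blocks = ["A", "B", "C", "D"]
      rows = ["A", "B", "C", "D", "E"]
      seats = range(1, 16)

      values = list(itertools.product(blocks, rows, seats))  # 4 * 5 * 15 = 300 rows

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE seating (seat_id INTEGER PRIMARY KEY, "
                   "seat_block TEXT, seat_row TEXT, seat_number INTEGER)")
      conn.executemany(
          "INSERT INTO seating (seat_block, seat_row, seat_number) VALUES (?, ?, ?)",
          values,
      )
      conn.commit()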

    Read the article

  • Designing model/database for distance between any two locations (that may change)

    - by Yo Ludke
    We are building a web app that has a number of events, each with a location (user-generated content, so the number of events will keep growing). The distance between any two events should be available, for example to determine the five closest events and similar queries. Users may change the locations of events. How should one design the database/model for this in a scalable way? I was thinking of a "distance table" (like http://www.deutschland-tourist.info/images/entfernungstabelle.gif). Then, whenever a location changes, one row and one column have to be recalculated (this could be done in a delayed job, because the changes do not need to be visible instantly). Possible scaling problems: the table gets too large (n² entries for n events) and there is too much calculation to do. For example, would this still be okay for 10,000 users? If each of them created just one event, that would already be 100 million integers... Do you think this is an efficient way to do it? How could one realise such a distance table with a Rails model? Is it possible with a SQL database? Would you suggest other approaches?
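
    Editor's note: one alternative, sketched below with made-up data, is not to store the n² distances at all and instead compute the nearest events on demand from the stored coordinates; with PostGIS or a geo-aware index the database could do the same thing more efficiently.

      # Sketch: find the closest events from stored (lat, lon) pairs instead of
      # maintaining an n-by-n distance table. Event data here is illustrative.
      from math import radians, sin, cos, asin, sqrt

      def haversine_km(lat1, lon1, lat2, lon2):
          # Great-circle distance between two points, in kilometres.
          lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
          a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
          return 2 * 6371 * asin(sqrt(a))

      events = {"concert": (52.52, 13.40), "fair": (48.14, 11.58), "expo": (50.94, 6.96)}

      def closest(lat, lon, n=5):
          # Sort event names by distance from the query point, keep the first n.
          return sorted(events, key=lambda e: haversine_km(lat, lon, *events[e]))[:n]

      print(closest(51.0, 10.0))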

    Read the article

  • Schema for storing "binary" values, such as Male/Female, in a database

    - by latentflip
    Intro: I am trying to decide how best to set up my database schema for a (Rails) model. I have a model related to money which indicates whether a value is an income (positive cash value) or an expense (negative cash value). I would like separate column(s) to indicate whether it is an income or an expense, rather than relying on whether the stored value is positive or negative.

    Question: how would you store these values, and why?

      1. Have a single column, say Income, and store 1 if it is an income, 0 if it is an expense, NULL if not known.
      2. Have two columns, Income and Expense, setting their values to 1 or 0 as appropriate.
      3. Something else?

    I figure the question is similar to storing a person's gender in a database, hence my title.

    My thoughts so far: lookup might be easier with a single column, but there is a risk of mistaking 0 (false, expense) for NULL (unknown). Having separate columns might be harder to maintain (what happens if we end up with a 1 in both columns?). Maybe it is not that big a deal which way I go, but it would be great to have any concerns/thoughts raised before I get too far down the line and have to change my code base because I missed something that should have been obvious! Thanks, Philip
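
    Editor's note: the NULL-versus-0 distinction the asker worries about is easy to demonstrate; a small sketch in plain SQLite (table and column names are illustrative, not the asker's schema).

      # Sketch: a single nullable "income" flag keeps unknown (NULL) distinct from
      # expense (0), as long as queries test for NULL explicitly.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, amount REAL, income INTEGER)")
      conn.executemany("INSERT INTO entries (amount, income) VALUES (?, ?)",
                       [(100.0, 1), (40.0, 0), (25.0, None)])

      expenses = conn.execute("SELECT COUNT(*) FROM entries WHERE income = 0").fetchone()[0]
      unknown = conn.execute("SELECT COUNT(*) FROM entries WHERE income IS NULL").fetchone()[0]
      print(expenses, unknown)   # 1 1 -- the NULL row is not counted as an expense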

    Read the article

  • Creating an Android app database with a large amount of data

    - by Thomas
    Hi all, the database of my application needs to be filled with a lot of data, so during onCreate() there are not only a few CREATE TABLE statements but also a lot of INSERTs. The solution I chose is to store all these instructions in an SQL file located in res/raw, which is loaded with Resources.openRawResource(id). It works well, but I am facing an encoding issue: the accented characters in the SQL file appear garbled in my application. This is my code:

      public String getFileContent(Resources resources, int rawId) throws IOException {
          InputStream is = resources.openRawResource(rawId);
          int size = is.available();
          // Read the entire asset into a local byte buffer.
          byte[] buffer = new byte[size];
          is.read(buffer);
          is.close();
          // Convert the buffer into a string.
          return new String(buffer);
      }

      public void onCreate(SQLiteDatabase db) {
          try {
              // get file content
              String sqlCode = getFileContent(mCtx.getResources(), R.raw.db_create);
              // execute code
              for (String sqlStatements : sqlCode.split(";")) {
                  db.execSQL(sqlStatements);
              }
              Log.v("Creating database done.");
          } catch (IOException e) {
              // Should never happen!
              Log.e("Error reading sql file " + e.getMessage(), e);
              throw new RuntimeException(e);
          } catch (SQLException e) {
              Log.e("Error executing sql code " + e.getMessage(), e);
              throw new RuntimeException(e);
          }
      }

    The workaround I found is to load the SQL instructions from a huge static final String instead of a file, and then all accented characters appear correctly. But isn't there a more elegant way to load SQL instructions than one big static final String attribute holding them all? Thanks in advance, Thomas

    Read the article

  • Page URL and database organization.

    - by shurik2533
    I want a page's heading to be used as its URL. For example, if a page has the heading "Some Page", its address should be http://somesite/some_page/. The "some_page" name is generated by the system automatically and is the unique identifier of the page. The problem is that a user may later enter a name which already exists, which would cause an error. I need a solution that stays efficient for large volumes of data. I have solved it as follows: the page identifier in the database is the page name plus a suffix, which defaults to zero. When a page is added, an existence check is done. If no such page exists, the suffix is 0 and its name is "some_page"; if the page already exists, I look up the maximum suffix, set suffix = suffix + 1, and the page name becomes "some_page_1". For this I created a compound key in the database from the "suffix" and "pageName" fields:

      Table Pages
      suffix | pageName   | pageTitle
      0      | some_page  | Some Page
      1      | some_page  | Some Page
      0      | other_page | Other Page

    Pages are added through a stored procedure:

      CREATE PROCEDURE addPage (pageNameVal VARCHAR(100), pageTitleVal VARCHAR(100))
      BEGIN
          DECLARE v INT DEFAULT 0;
          SELECT MAX(suffix) FROM pages WHERE pageName = pageNameVal INTO v;
          IF v >= 0 THEN
              SET v = v + 1;
          ELSE
              SET v = 0;
          END IF;
          INSERT INTO pages (suffix, pageName, pageTitle) VALUES (v, pageNameVal, pageTitleVal);
      END;

    Is there a better solution?
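
    Editor's note: the same idea is often handled in application code rather than a stored procedure; a hedged sketch of slug generation with a numeric suffix (the set of existing slugs stands in for a database lookup, and all names are illustrative).

      # Sketch: derive a unique page slug from a heading by appending a suffix.
      import re

      existing_slugs = {"some_page", "some_page_1", "other_page"}

      def slugify(heading):
          slug = re.sub(r"[^a-z0-9]+", "_", heading.lower()).strip("_")
          if slug not in existing_slugs:
              return slug
          n = 1
          while "%s_%d" % (slug, n) in existing_slugs:
              n += 1
          return "%s_%d" % (slug, n)

      print(slugify("Some Page"))   # some_page_2
      print(slugify("New Page"))    # new_page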

    Read the article

  • How to handle product ratings in a database

    - by Mel
    Hello, I would like to know the best approach to storing product ratings in a database. I have in mind the following two (simplified, MySQL) scenarios.

    Scenario 1: create two columns in the products table to store the number of votes and the sum of all votes, and use them to compute an average on the product display page:

      products(productID, productName, voteCount, voteSum)

    Pros: I only need to access one table, and thus execute one query, to display product data and ratings. Cons: write operations are executed against a table whose original purpose is only to furnish product data.

    Scenario 2: create an additional table to store ratings:

      products(productID, productName)
      ratings(productID, voteCount, voteSum)

    Pros: ratings are isolated in a separate table, leaving the products table to furnish data on available products. Cons: I have to execute two separate queries per product page request (one for data and another for ratings).

    In terms of performance, which approach is better: allow users to execute an occasional write query against a table that handles hundreds of read requests, or execute two queries on every product page but isolate the writes in a separate table? I'm a novice at database development and often find myself struggling with simple questions such as these. Many thanks
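
    Editor's note: if the counters live on the product row (scenario 1), the write can at least be kept to a single atomic statement; a sketch in Django ORM terms, where Product, vote_count and vote_sum are assumed names, not part of the question.

      # Sketch: record one vote against counters stored on the product row.
      from django.db.models import F

      def add_vote(product_id, rating):
          from myshop.models import Product  # hypothetical app and model
          # One UPDATE, computed in the database, so concurrent votes don't clobber each other.
          Product.objects.filter(pk=product_id).update(
              vote_count=F('vote_count') + 1,
              vote_sum=F('vote_sum') + rating,
          )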

    Read the article

  • The best way to structure this database?

    - by James P
    At the moment I'm doing this:

      gems(id, name, colour, level, effects, source)

    id is the primary key and is not auto-increment. A typical row of data looks like this:

      id      => 40153
      name    => Veiled Ametrine
      colour  => Orange
      level   => 80
      effects => +12 sp, +10 hit
      source  => Ametrine

    (Some of you gamers might see what I'm doing here :) ) But I realise this could be organised a lot better. I studied database relationships and secondary keys in my A-Level computing class but never got as far as setting one up properly. I just need help with how this database should be organised: what tables should hold what data, with what secondary and foreign keys? I was thinking maybe three tables: gem, effects, source, which then have relationships to each other? Can anyone shed some light on this? Is the more involved structure I'm proposing really the way to go, or should I just carry on with what I'm doing? Cheers.
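
    Editor's note: a hedged sketch of the three-table split in Django model terms; the field names are guesses at what the asker describes, not a definitive design.

      # Sketch: gems normalised into gem / effect / source tables, so an effect
      # like "+12 sp" can be shared by many gems instead of living in a text column.
      from django.db import models

      class Source(models.Model):
          name = models.CharField(max_length=100)

      class Effect(models.Model):
          description = models.CharField(max_length=100)   # e.g. "+12 sp"

      class Gem(models.Model):
          id = models.IntegerField(primary_key=True)       # e.g. 40153, not auto-increment
          name = models.CharField(max_length=100)
          colour = models.CharField(max_length=30)
          level = models.IntegerField()
          effects = models.ManyToManyField(Effect)
          source = models.ForeignKey(Source)               # newer Django needs on_delete=models.CASCADE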

    Read the article

  • Unaccounted for database size

    - by Nazadus
    I currently have a database that is 20GB in size. I've run a few scripts which show each table's size (and other incredibly useful information such as index data), and the biggest table is 1.1 million records, which takes up 150MB of data. We have fewer than 50 tables, most of which take up less than 1MB of data. After looking at the size of each table, I don't understand why the database shouldn't be around 1GB after a shrink. The amount of available free space that SQL Server (2005) reports is 0%. The log mode is set to simple. At this point my main concern is that I feel like I have 19GB of unaccounted-for used space. Is there something else I should look at? Normally I wouldn't care and would make this a passive research project, except this particular situation requires us to do a backup and restore on a weekly basis to put a copy on a satellite (which has no internet, so it must be done manually). I'd much rather copy 1GB (or even 5GB!) than 20GB of data each week. sp_spaceused reports the following:

      Navigator-Production   19184.56 MB   3.02 MB

    And the second part of it:

      19640872 KB   19512112 KB   108184 KB   20576 KB

    I've found a few other scripts (such as the ones from two of the server database size questions here); they all report the same information as above or below. The script I am using is from SQLTeam. Here is the header info:

      * BigTables.sql
      * Bill Graziano (SQLTeam.com)
      * graz@<email removed>
      * v1.11

    The top few tables show this (table, rows, reserved space, data, index, unused, etc.):

      Activity          1143639   131 MB   89 MB   41768 KB   1648 KB    46%   1%
      EventAttendance    883261    90 MB   58 MB   32264 KB    328 KB    54%   0%
      Person             113437    31 MB   15 MB   15752 KB    912 KB   103%   3%
      HouseholdMember    113443    12 MB    6 MB    5224 KB    432 KB    82%   4%
      PostalAddress       48870     8 MB    6 MB    2200 KB    280 KB    36%   3%

    The rest of the tables are the same size or smaller. No more than 50 tables.

    Update 1: All tables use unique identifiers, usually an int incremented by 1 per row. I've also re-indexed everything. I ran the DBCC shrink command as well as updating the usage before and after, over and over. An interesting thing I found is that when I restarted the server and confirmed no one was using it (and no maintenance procs were running; this is a very new application, under a week old), every now and then the shrink would say something about data having changed. Googling yielded too few useful answers, with the obvious ones not applying (it was 1am and I had disconnected everyone, so it seems impossible that was really the case). The data was migrated via C# code which basically looked at another server and brought things over. The quantity of deletes at this point is probably under 50k rows; even if those were the biggest rows, that wouldn't be more than 100MB, I would imagine. When I go to shrink via the GUI it reports 0% available to shrink, indicating that it's already as small as it thinks it can go.

    Update 2: sp_spaceused 'Activity' yields this (which seems right on the money):

      Activity   1143639   134488 KB   91072 KB   41768 KB   1648 KB

    The fill factor was 90. All primary keys are ints. Here is the command I used to update usage:

      DBCC UPDATEUSAGE(0);

    Update 3: Per Edosoft's request:

      Image   111975   2407773   19262184

    It appears as though the Image table believes it is the 19GB portion. I don't understand what this means, though. Is it really 19GB, or is it misrepresented?

    Update 4: Talking to a co-worker, I found out that it's because of the pages, as someone else here has also stated was a possibility. The only index on the Image table is a clustered PK. Is this something I can fix, or do I just have to deal with it? The regular script shows the Image table to be 6MB in size.

    Update 5: I think I'm just going to have to deal with it after further research. The images have been resized to roughly 2-5KB each, and on a normal file system they don't consume much space, but in SQL Server they seem to consume considerably more. The real answer, in the long run, will likely be separating that table into another partition or something similar.

    Read the article

  • After extending the User model in django, how do you create a ModelForm?

    - by mlissner
    I extended the User model in Django to include several other variables, such as location and employer. Now I'm trying to create a form that has the following fields:

      First name (from User)
      Last name (from User)
      Location (from UserProfile, which extends User via a foreign key)
      Employer (also from UserProfile)

    I have created a modelform:

      from django.forms import ModelForm
      from django.contrib import auth
      from alert.userHandling.models import UserProfile

      class ProfileForm(ModelForm):
          class Meta:
              # model = auth.models.User  # this gives me the User fields
              model = UserProfile         # this gives me the UserProfile fields

    So, my question is, how can I create a ModelForm that has access to all of the fields, whether they are from the User model or the UserProfile model? Hope this makes sense. I'll be happy to clarify if there are any questions.
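
    Editor's note: a hedged sketch of one common pattern, a ModelForm on UserProfile with extra declared fields for the User-owned data; it assumes the profile's foreign key is called user and that UserProfile has location and employer fields, none of which is confirmed by the question.

      # Sketch: one form covering fields from both User and its UserProfile.
      from django import forms
      from alert.userHandling.models import UserProfile  # the asker's app

      class ProfileForm(forms.ModelForm):
          first_name = forms.CharField(max_length=30)
          last_name = forms.CharField(max_length=30)

          class Meta:
              model = UserProfile
              fields = ['location', 'employer']   # assumed UserProfile field names

          def __init__(self, *args, **kwargs):
              super(ProfileForm, self).__init__(*args, **kwargs)
              if self.instance and self.instance.pk:
                  # Pre-fill the User-owned fields from the related user.
                  self.fields['first_name'].initial = self.instance.user.first_name
                  self.fields['last_name'].initial = self.instance.user.last_name

          def save(self, commit=True):
              profile = super(ProfileForm, self).save(commit=commit)
              user = profile.user
              user.first_name = self.cleaned_data['first_name']
              user.last_name = self.cleaned_data['last_name']
              if commit:
                  user.save()
              return profile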

    Read the article

  • How to create an exception folder in a django site?

    - by ninja123
    There are a few folders within my Django site that I want to be served as they would be on any other, non-Django site; namely forum (vBulletin) and cpanel. I currently run the site with FastCGI. My .htaccess looks like this:

      AddHandler application/x-httpd-php5 .htm
      AddHandler application/x-httpd-php5 .html
      AddHandler fastcgi-script .fcgi
      Options +FollowSymLinks
      RewriteEngine On
      RewriteBase /
      AddHandler application/x-httpd-php5 .htm
      RewriteCond %{REQUEST_URI} !(mysite.fcgi)
      RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]

    What lines can I add so that www.mysite.com/forum is not picked up by the Django URLs and is served as it normally would be? Thanks.

    Read the article

  • How do I configure the Python logging module in Django?

    - by mipadi
    I'm trying to configure logging for a Django app using the Python logging module. I have placed the following bit of configuration code in my Django project's settings.py file:

      import logging
      import logging.handlers
      import os

      date_fmt = '%m/%d/%Y %H:%M:%S'
      log_formatter = logging.Formatter(u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)',
                                        datefmt=date_fmt)
      log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
      log_name = os.path.join(log_dir, "nyrb.log")
      bytes = 1024 * 1024  # 1 MB

      if not os.path.exists(log_dir):
          os.makedirs(log_dir)

      handler = logging.handlers.RotatingFileHandler(log_name, maxBytes=bytes, backupCount=7)
      handler.setFormatter(log_formatter)
      handler.setLevel(logging.DEBUG)
      logging.getLogger().setLevel(logging.DEBUG)
      logging.getLogger().addHandler(handler)

      logging.getLogger(__name__).info("Initialized logging subsystem")

    At startup, I get a couple Django-related messages, as well as the "Initialized logging subsystem" message, in the log files, but then all the log messages end up going to the web server logs (/var/log/apache2/error.log, since I'm using Apache), and use the standard log format (not the formatter I designated). Am I configuring logging incorrectly?
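
    Editor's note: not necessarily the asker's fix, but one pattern worth comparing against — the same handler setup guarded so that it only runs once even if settings.py is imported more than once under the deployment setup. The file name and format are illustrative.

      # Sketch: attach the rotating file handler to the root logger exactly once.
      import logging
      import logging.handlers

      root = logging.getLogger()
      if not any(isinstance(h, logging.handlers.RotatingFileHandler) for h in root.handlers):
          handler = logging.handlers.RotatingFileHandler("my_app.log",
                                                         maxBytes=1024 * 1024, backupCount=7)
          handler.setFormatter(logging.Formatter('[%(asctime)s] %(levelname)-7s: %(message)s'))
          root.addHandler(handler)
          root.setLevel(logging.DEBUG)

      logging.getLogger(__name__).info("Initialized logging subsystem")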

    Read the article

  • Do Django tests run slower on the Mac compared to Linux?

    - by Thierry Lam
    I'm currently developing my Django projects on both:

      Mac OS X 10.5, 32-bit
      Ubuntu Server 9.10, 64-bit (1 CPU, 512MB RAM)

    Both of the above OSes are using:

      Python 2.6.4
      Django 1.1.1
      MySQL 5.1

    Running 12 tests for one of my applications takes:

      Mac: 57.513s
      Linux: 30.935s

    EDIT: Mac hardware spec: MacBook Pro, 2.2 GHz Intel Core 2 Duo, 3GB RAM. I'm running the Ubuntu OS on the same Mac above through VMware Fusion 2.0.6. You might argue that Ubuntu Server 64-bit is faster, but I have observed a similar speed difference on the Ubuntu 8.10 32-bit desktop edition. Even if I turn off my Linux VM and other Mac applications, I still experience the slowness. Has anyone else experienced this Django test speed difference across these two OSes?

    Read the article

  • django-lfs image upload doesn't work in some environments?

    - by vernomcrp
    Yesterday I completed a django-lfs setup without buildout. I could happily create categories and products, but when I upload an image to a product, after I push the upload button it always stays 'pending'. I am using Fedora with Django==1.1.2 and PIL==1.1.7, and it works on OS X. Now I have tried Ubuntu 9.10, also with PIL==1.1.7 and Django==1.1.2, and it won't work there either. Does anyone have a good solution for this? (I suspect the Flash version may be involved, because the upload part looks like it comes from Flash.)

    Read the article

  • Python: How can I override one module in a package with a modified version that lives outside the package?

    - by zlovelady
    I would like to update one module in a Python package with my own version of the module, with the following conditions: I want my updated module to live outside of the original package (either because I don't have access to the package source, or because I want to keep my local modifications in a separate repo, etc.), and I want import statements that refer to the original package/module to resolve to my local module. Here's an example of what I'd like to do, using specifics from Django, because that's where this problem has arisen for me. Say this is my project structure:

      django/
          ... the original, unadulterated django package ...
      local_django/
          conf/
              settings.py
      myproject/
          __init__.py
          myapp/
              myfile.py

    And then in myfile.py:

      # These imports should fetch modules from the original django package
      from django import models
      from django.core.urlresolvers import reverse

      # I would like this following import statement to grab a custom version of settings
      # that I define in local_django/conf/settings.py
      from django.conf import settings

      def foo():
          return settings.some_setting

    Can I do some magic with the __import__ statement in myproject/__init__.py to accomplish this? Is there a more "pythonic" way to achieve this?
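
    Editor's note: one approach, sketched below rather than offered as the cleanest answer, is to install the replacement into sys.modules and onto the original package before anything imports it. The module names follow the question's layout and assume local_django and local_django/conf are importable packages.

      # Sketch: make "from django.conf import settings" resolve to the local copy.
      # This must run before any code imports django.conf.settings, e.g. at the
      # top of myproject/__init__.py or manage.py.
      import sys
      import local_django.conf.settings as local_settings

      import django.conf
      django.conf.settings = local_settings                  # attribute-level override
      sys.modules['django.conf.settings'] = local_settings   # module-level override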

    Read the article

  • Does Python Django support custom SQL and denormalized databases with no Foreign Key relationships?

    - by Jay
    I've just started learning Python and Django, and I have a lot of experience building high-traffic websites using PHP and MySQL. What worries me so far is Django's rather optimistic assumption that you will never need to write custom SQL, and that it automatically creates all these foreign key relationships in your database. The one thing I've learned in the last few years of building Chess.com is that it's impossible NOT to write custom SQL when you're dealing with something like MySQL, which frequently needs to be told which indexes it should use (or avoid), and that foreign keys were a death sentence for us; Percona's strongest recommendation was for us to remove all FKs for optimal performance. Is there a way in Django to do this in the models file: create relationships without creating actual DB FKs? Or is there a way to start at the database level, design/create my database, and then have Django reverse-engineer the models file?
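
    Editor's note: two relevant building blocks, without claiming they settle the FK question for the asker's Django version — manage.py inspectdb can reverse-engineer model definitions from an existing schema, and hand-written SQL is always available through the DB connection, as in this sketch with illustrative table and column names.

      # Sketch: running hand-written SQL from Django instead of the ORM.
      from django.db import connection

      def top_players(limit=10):
          cursor = connection.cursor()
          cursor.execute(
              "SELECT id, username, rating FROM players "
              "ORDER BY rating DESC LIMIT %s", [limit])
          return cursor.fetchall()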

    Read the article

  • Server-side access to Client Browser's Latitude/Longitude using Django.

    - by ZenGyro
    Hello, I am writing a little app that compares a user's position against a database on a web server written using Django and performs some functions with it. Accessing the browser's geolocation data (in supported browsers) is fairly trivial using JavaScript. But what is the best way to let the Django server access the longitude and latitude variables? Is it best to wrap them up as a JSON object and send them to the server via POST? Or is there some easier (Geo)Django-based way to access the navigator.geolocation browser object? Please forgive a newbie a question like this, but my Google-fu only seems to find ways to insert variables into JavaScript via template tags, whereas I need it to work the other way! Any advice or code snippets greatly appreciated. Feel free to talk to me like I am an idiot.
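
    Editor's note: for the POST route, the server side can be as small as this sketch; the URL wiring, the JavaScript sender, and the field names are assumptions, not part of the question.

      # Sketch: a Django view that reads latitude/longitude posted by the browser.
      from django.http import HttpResponse, HttpResponseBadRequest

      def report_position(request):
          try:
              lat = float(request.POST['latitude'])
              lon = float(request.POST['longitude'])
          except (KeyError, ValueError):
              return HttpResponseBadRequest("latitude/longitude missing or invalid")
          # ... compare (lat, lon) against the events in the database here ...
          return HttpResponse("ok")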

    Read the article

  • How to use external static files with Django (serving external files once again)?

    - by Tomas Novotny
    Hi, even after Googling and reading all the relevant posts on StackOverflow, I still can't get static files working in my Django application. Here is how my files look:

    settings.py:

      MEDIA_ROOT = os.path.join(SITE_ROOT, 'static')
      MEDIA_URL = '/static/'

    urls.py:

      from DjangoBandCreatorSite.settings import DEBUG

      if DEBUG:
          urlpatterns += patterns('', (
              r'^static/(?P<path>.*)$',
              'django.views.static.serve',
              {'document_root': 'static'}
          ))

    template:

      <script type="text/javascript" src="/static/jquery.js"></script>
      <script type="text/javascript">

    I am trying to use jquery.js stored in the directory "static". I am using:

      Windows XP
      Python 2.6.4
      Django 1.2.3

    Thank you very much for any help
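
    Editor's note: one thing worth checking, sketched under the assumption that the settings above are in effect — 'document_root': 'static' is a relative path, and pointing the static serve view at the absolute MEDIA_ROOT is the usual development pattern.

      # Sketch: development-only static serving from the absolute MEDIA_ROOT
      # (Django 1.2-era imports; slot this into the existing urls.py).
      from django.conf import settings
      from django.conf.urls.defaults import patterns

      urlpatterns = patterns('')  # ... the project's existing patterns go here ...

      if settings.DEBUG:
          urlpatterns += patterns('',
              (r'^static/(?P<path>.*)$', 'django.views.static.serve',
               {'document_root': settings.MEDIA_ROOT}),
          )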

    Read the article

  • Can't save form content to database

    - by dana
    I'm trying to save a 100-character post from the user in a minimal 'microblog' application. My code looks right to me, but it doesn't work: the problem is in views.py, where I can't save the foreign key to the user table. models.py looks like this:

      class NewManager(models.Manager):
          def create_post(self, post, username):
              new = self.model(post=post, created_by=username)
              new.save()
              return new

      class New(models.Model):
          post = models.CharField(max_length=120)
          date = models.DateTimeField(auto_now_add=True)
          created_by = models.ForeignKey(User, blank=True)
          objects = NewManager()

      class NewForm(ModelForm):
          class Meta:
              model = New
              fields = ['post']
              # widgets = {'post': Textarea(attrs={'cols': 80, 'rows': 20})

    and the view:

      def save_new(request):
          if request.method == 'POST':
              created_by = User.objects.get(created_by = user)
              date = request.POST.get('date', '')
              post = request.POST.get('post', '')
              new_obj = New(post=post, date=date, created_by=created_by)
              new_obj.save()
              return HttpResponseRedirect('/')
          else:
              form = NewForm()
          return render_to_response('news/new_form.html', {'form': form},
                                    context_instance=RequestContext(request))

    I didn't include the imports here; they are done right, anyway. The problem is in views.py: when I try to save, it says "local variable 'created_by' referenced before assignment", and if I put created_by as a parameter, the save needs more parameters... It is really weird. Help please!
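
    Editor's note: a hedged sketch of how this view is usually written — take the author from request.user and let the ModelForm handle the posted data. It assumes the models above, a logged-in user, and an app path of news.

      # Sketch: save the post and attach the current user as created_by.
      from django.http import HttpResponseRedirect
      from django.shortcuts import render_to_response
      from django.template import RequestContext
      from news.models import NewForm  # hypothetical app path for the question's models

      def save_new(request):
          if request.method == 'POST':
              form = NewForm(request.POST)
              if form.is_valid():
                  new_obj = form.save(commit=False)   # don't hit the DB yet
                  new_obj.created_by = request.user   # the logged-in user
                  new_obj.save()                      # date is set by auto_now_add
                  return HttpResponseRedirect('/')
          else:
              form = NewForm()
          return render_to_response('news/new_form.html', {'form': form},
                                    context_instance=RequestContext(request))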

    Read the article

  • Django 0.0.0.0:80; can't access remotely

    - by user349555
    Hello, I'm trying to access my Django server from another computer on the same network. I've set up my server and can view everything correctly using python manage.py runserver and going to http://127.0.0.1:8000, but when I try to use python manage.py runserver 0.0.0.0:80, I can't view my Django page from another computer. The computer hosting the Django server has intranet IP 192.168.1.146. On my secondary computer, I fire up a browser and try to access http://192.168.1.146:80, to no avail. I've also forwarded port 80 (and I've tried 8000 as well), also to no avail :(. Help!

    Read the article

  • PostgreSQL lots of writes

    - by strife911
    Hi, I am using PostgreSQL for a scientific application (unsupervised clustering). The Python program is multi-threaded, so that each thread manages its own postmaster process (one per core); hence there is a lot of concurrency. Each thread/process loops infinitely through two SQL queries: the first is for reading, the second is for writing. The read operation touches 500 times the number of rows that the write operation does. Here is the output of dstat:

      ----total-cpu-usage---- ------memory-usage----- -dsk/total- --paging-- --io/total-
      usr sys idl wai hiq siq| used  buff  cach  free| read  writ|  in   out| read  writ
        4   0  32  64   0   0|3599M   63M   57G 1893M|1524k   16M|   0     0|  98   2046
        1   0  35  64   0   0|3599M   63M   57G 1892M|1204k   17M|   0     0|  68   2062
        2   0  32  66   0   0|3599M   63M   57G 1890M|1132k   17M|   0     0|  62   2033
        2   1  32  65   0   0|3599M   63M   57G 1904M|1236k   18M|   0     0|  80   1994
        2   0  31  67   0   0|3599M   63M   57G 1903M|1312k   16M|   0     0|  70   1900
        2   0  37  60   0   0|3599M   63M   57G 1899M|1116k   15M|   0     0|  71   1594
        2   1  37  60   0   0|3599M   63M   57G 1898M| 448k   17M|   0     0|  39   2001
        2   0  25  72   0   0|3599M   63M   57G 1896M|1192k   17M|   0     0|  78   1946
        1   0  40  58   0   0|3599M   63M   57G 1895M| 432k   15M|   0     0|  38   1937

    I am pretty sure I could write more often than that, because I have seen it write up to 110-140M in dstat. How can I optimize this process?

    Read the article

  • SQL Server 2005 jobs running twice in a row - using LiteSpeed

    - by Malnizzle
    Howdy! I have a SQL Server (2005) backing up to a network share; a group of maintenance plans set up through LiteSpeed backs up different DBs. They were originally set up with two sub-plans on different schedules for full/diff backups and ran that way just fine for a couple of months. Then I added a "Clean Up" task to the sub-plans. Ever since that point, the backup creates another .bak right after the first .bak job is completed. I removed the clean-up item from the sub-plan, and it still creates two .baks when run. Both the SQL Activity Monitor and the machine's Windows application log show just one job being executed. I did the same thing on a couple of other servers backing up to the same location, and they behave correctly. Thoughts?

    Read the article

  • How to warehouse data that is not needed from MS SQL server

    - by I__
    I have been asked to truncate a large table in MS SQL Server 2008. The data is not needed but might be needed once every two years. It will NEVER have to be changed, only viewed. The question is, since I don't need the data on a day-to-day basis, what do I do with it to protect and back it up? Please keep in mind that I will need to have it accessible maybe once every two years, and it is FINE for us if the recovery process takes a few hours. The entire table is about 3 million rows and I need to truncate it to about 1 million rows.

    Read the article

  • PostgreSQL lots of large Arrays and Writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster process via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), by using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with each row. This array approach minimizes the size of the indexes and the tables. Each function ends with an INSERT into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called from Python will UPDATE a row of these kinds of tables (with large arrays). I currently have PostgreSQL configured to use most of the memory for cache (effective_cache_size of 57 GB, I think) and only a small amount of it for shared memory (1GB, I think). First, I was wondering what the difference between cache and shared memory is in regard to PostgreSQL (and my application). What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (SELECT unnest(array) etc.). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks

    Read the article

  • Column locking in innodb?

    - by mingyeow
    I know this sounds weird, but apparently one of my columns is locked:

      select * from table where type_id = 1 and updated_at < '2010-03-14' limit 1;
      select * from table where type_id = 3 and updated_at < '2010-03-14' limit 10;

    The first one will not finish running, while the second one completes smoothly. The only difference is the type_id. Thanks in advance for your help; I have an urgent data job to finish, and this problem is driving me crazy.

    Read the article

  • Maintenance Plan Reporting - Append To File - Clean Up?

    - by Adam J.R. Erickson
    Background: (SQL Server 2005, Standard Ed.) I have a maintenance plan running backups, taking a full backup once a day and a t-log backup every 15 minutes. I have it set to create a text-file report of each run, but that creates A LOT of files on the file server. These are hard to sort through, which makes them less useful. Question: there is an option in the "Reporting and Logging" settings for appending all logs together, but how do you clean this out? If you're appending to the same log file every time, how should you make sure this file doesn't grow indefinitely? Is there a built-in function to clean out portions of appended logs, like there is for cleaning out individual old log files?

    Read the article
