Search Results

Search found 72218 results on 2889 pages for 'multiple definition error'.

  • Polymorphism in SQL database tables?

    - by Patrick Daryll Glandien
    I currently have multiple tables in my database that consist of the same 'basic fields': name character varying(100), description text, url character varying(255). But I also have multiple specializations of that basic table: for example, tv_series has the fields season, episode and airing, while the movies table has release_date, budget, etc. At first this is not a problem, but now I want to create a second table, called linkgroups, with a foreign key to these specialized tables. That means I would somehow have to normalize it within itself. One way of solving this I have heard of is a key-value-pair table, but I do not like that idea: it is a kind of 'database-within-a-database' scheme, I have no way to require certain keys/fields or a particular type, and it would be a huge pain to fetch and order the data later. So I am looking for a way to 'share' a primary key between multiple tables or, even better, a way to normalize it by having a general table and multiple specialized tables.
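
    One common shape for that last option is to let every specialized table reuse the general table's primary key as its own. A minimal PostgreSQL sketch, with assumed table and column names (items, tv_series, movies, linkgroups):

```sql
-- general table holding the shared fields; its id is the only primary key
CREATE TABLE items (
    id          serial PRIMARY KEY,
    name        character varying(100),
    description text,
    url         character varying(255)
);

-- each specialization reuses the same id as both its PK and an FK
CREATE TABLE tv_series (
    item_id integer PRIMARY KEY REFERENCES items (id),
    season  integer,
    episode integer,
    airing  date
);

CREATE TABLE movies (
    item_id      integer PRIMARY KEY REFERENCES items (id),
    release_date date,
    budget       numeric
);

-- linkgroups only ever needs to reference the general table
CREATE TABLE linkgroups (
    id      serial PRIMARY KEY,
    item_id integer NOT NULL REFERENCES items (id)
);
```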

    Read the article

  • JQuery UI sortable: restore position based on some condition

    - by dafi
    I use the sortable stop() event to make an ajax call that stores some data after a drag & drop operation. When the ajax call returns an error (an application logic error or a network problem) I want to move the dragged element back to its original/start position. How can I achieve that? The scenario is: 1) user drags A to B; 2) the sortable stop() event is called and triggers an ajax call; 3) the ajax call returns an error; 4) inside the stop() event we get the ajax error; 5) A is moved back to its original position; 6) the user drags A to B again; 7) everything is ok; 8) A remains in its new position in B. Steps 6-8 are shown to clarify that a subsequent drag can be done and the previous error must be forgotten.
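
    One way to do this with jQuery UI sortable (a sketch; the selector, URL and data format are assumptions) is to remember where the item started in the start event and put it back if the ajax call made from stop fails:

```javascript
$("#list").sortable({
  start: function (event, ui) {
    // remember the item's previous sibling so its position can be restored later
    ui.item.data("prevSibling",
      ui.item.prevAll(":not(.ui-sortable-placeholder)").first());
  },
  stop: function (event, ui) {
    $.ajax({
      url: "/save-order",                           // hypothetical endpoint
      type: "POST",
      data: ui.item.parent().sortable("serialize"),
      error: function () {
        // ajax failed: move A back to where it came from
        var prev = ui.item.data("prevSibling");
        if (prev.length) {
          ui.item.insertAfter(prev);
        } else {
          ui.item.parent().prepend(ui.item);        // it was the first item
        }
      }
    });
  }
});
```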

    Read the article

  • Massive SQL issue shutting down our site.

    - by Pselus
    Our website has started timing out like crazy today. All of our clients are finding it unusable. The only error we can seem to trace down as a potential problem is this ASP error description: "SQLAllocHandle on SQL_HANDLE_DBC failed" (error category: Microsoft OLE DB Provider for ODBC Drivers). I have no idea what it means or how to go about fixing it. Has anyone ever encountered this error before? Currently you can log in to our site, but once you go to do anything else you either find yourself logged out or nothing happens. We have a lot of Ajax going on, so the "nothing happens" probably means the Ajax pages are not loading properly because of the logouts, and so nothing displays to the user. Like I said, I'm at a loss. Does anyone have any advice on this error?

    Read the article

  • Proper way of utilizing Gearman with my PHP application.

    - by luckytaxi
    For now I just want to use Gearman for background processing. For example, I need to email a recipient that they have a private message waiting for them once the sender submits their message to the DB. I assume I can run the worker/client and the server on my primary server, but I have no problem offloading some of the tasks to a different web server. Anyway, my question is: how do I handle multiple "functions"? Let's say I need a job that handles the email portion and a job that handles image manipulation. Can I have multiple functions in one worker? I've followed a couple of examples I found online, but each example only shows one function being registered. Do I have to start up multiple workers to handle multiple functions?
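
    A single worker process can register as many functions as it likes; the job server hands it whichever job comes in next. A sketch using the pecl/gearman extension (the function names and workload format are assumptions):

```php
<?php
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);

// one worker, two registered functions
$worker->addFunction('send_pm_email', function (GearmanJob $job) {
    $data = json_decode($job->workload(), true);
    mail($data['to'], 'You have a new private message', $data['body']);
});

$worker->addFunction('resize_image', function (GearmanJob $job) {
    $data = json_decode($job->workload(), true);
    // ... image manipulation for $data['path'] goes here ...
});

while ($worker->work()) {
    // loop forever; each iteration handles one job of either type
}
```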

    Read the article

  • How do I create a check constraint?

    - by Zack Peterson
    Please imagine this small database... Tables: Volunteer (Id, Name, Email, Phone, Comment), Event (Id, Name, Location, Day, Description), Shift (Id, EventId, VolunteerId, Description, Start, End), EventVolunteer (EventId, VolunteerId). Associations: Volunteers may sign up for multiple events. Events may be staffed by multiple volunteers. An event may have multiple shifts. A shift belongs to only a single event. A shift may be staffed by only a single volunteer. A volunteer may staff multiple shifts. Check constraints: Can I create a check constraint to enforce that no shift is staffed by a volunteer that's not signed up for that shift's event? Can I create a check constraint to enforce that two overlapping shifts are never staffed by the same volunteer?
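
    In most engines a CHECK constraint can only see the row being modified, so cross-table rules like these are normally enforced with a trigger or with a CHECK that calls a scalar function. A T-SQL sketch of the first rule, with made-up object names (the overlapping-shifts rule is usually easier to express as a trigger):

```sql
-- returns 1 when the volunteer is signed up for the given event
CREATE FUNCTION dbo.VolunteerIsSignedUp (@EventId int, @VolunteerId int)
RETURNS bit
AS
BEGIN
    IF EXISTS (SELECT 1 FROM EventVolunteer
               WHERE EventId = @EventId AND VolunteerId = @VolunteerId)
        RETURN 1;
    RETURN 0;
END;
GO

-- reject any shift whose volunteer is not signed up for the shift's event
ALTER TABLE Shift ADD CONSTRAINT CK_Shift_VolunteerSignedUp
    CHECK (dbo.VolunteerIsSignedUp(EventId, VolunteerId) = 1);
```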

    Read the article

  • iPhone Core Data app crash

    - by satyam
    I was able to complete my iPhone app using Core Data internally. But the first time I run it in the simulator or on the device, it crashes with the following error: 2010-03-18 10:55:41.785 CrData[1605:4603] Unresolved error Error Domain=NSCocoaErrorDomain Code=513 UserInfo=0x50448d0 "Operation could not be completed. (Cocoa error 513.)", { NSUnderlyingException = Error validating url for store; } When I run the app again in the simulator or on the device, it runs perfectly. I'm not able to identify the exact problem. Can someone guide me on how to proceed further?
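
    Cocoa error 513 (NSFileWriteNoPermissionError) together with "Error validating url for store" often means the directory the persistent store URL points at does not exist or is not writable on the very first launch. A hedged Objective-C sketch, creating the directory before the store is added (the applicationDocumentsDirectory and store file names follow the Xcode template and may differ in your project):

```objc
NSString *storeDir = [self applicationDocumentsDirectory];   // wherever the store lives
NSError *dirError = nil;

// make sure the directory exists before NSPersistentStoreCoordinator touches it
if (![[NSFileManager defaultManager] fileExistsAtPath:storeDir]) {
    [[NSFileManager defaultManager] createDirectoryAtPath:storeDir
                               withIntermediateDirectories:YES
                                                attributes:nil
                                                     error:&dirError];
}

NSURL *storeURL = [NSURL fileURLWithPath:
    [storeDir stringByAppendingPathComponent:@"CrData.sqlite"]];
```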

    Read the article

  • How to use unicode inside an xpath string? (UnicodeEncodeError)

    - by Gj
    I'm using xpath in Selenium RC via the Python API. I need to click an a element whose text is "Submit »" Here's the error that I'm getting: In [18]: sel.click(u"xpath=//a[text()='Submit \xbb')]") ERROR: An unexpected error occurred while tokenizing input The following traceback may be corrupted or invalid The error message is: ('EOF in multi-line statement', (1121, 0)) --------------------------------------------------------------------------- Exception Traceback (most recent call last) /Users/me/<ipython console> in <module>() /Users/me/selenium.pyc in click(self, locator) 282 'locator' is an element locator 283 """ --> 284 self.do_command("click", [locator,]) 285 286 /Users/me/selenium.pyc in do_command(self, verb, args) 213 #print "Selenium Result: " + repr(data) + "\n\n" 214 if (not data.startswith('OK')): --> 215 raise Exception, data 216 return data 217 <type 'str'>: (<type 'exceptions.UnicodeEncodeError'>, UnicodeEncodeError('ascii', u"ERROR: Invalid xpath [2]: //a[text()='Submit \xbb')]", 45, 46, 'ordinal not in range(128)'))
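
    Two separate things seem to be going on here: the locator itself is invalid (there is a stray ')' after the text() predicate, which is what "Invalid xpath [2]" complains about), and the old selenium.py client then raises UnicodeEncodeError while building an ASCII error message containing the » character. A Python 2 sketch of what I would try (hedged, not a guaranteed fix):

```python
# -*- coding: utf-8 -*-

# original locator: u"xpath=//a[text()='Submit \xbb')]"  <- note the extra ')'
locator = u"xpath=//a[text()='Submit \xbb']"

try:
    sel.click(locator)
except UnicodeEncodeError:
    # if the RC client still insists on ASCII, hand it UTF-8 bytes instead
    sel.click(locator.encode("utf-8"))
```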

    Read the article

  • Android activity class does not exist?

    - by user975234
    I have been developing an Android app project in Eclipse. An error which I frequently get is: activity class does not exist. But when I just save the manifest file once again, the error vanishes and the program runs correctly. Why then do I get the same error again and again? Console error: [2011-11-18 15:08:38 - link] Starting activity acb.abc.LinkActivity on device emulator-5554 [2011-11-18 15:08:40 - link] ActivityManager: Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] cmp=acb.abc/.LinkActivity } [2011-11-18 15:08:40 - link] New package not yet registered with the system. Waiting 3 seconds before next attempt. [2011-11-18 15:08:40 - link] ActivityManager: Error: Activity class {acb.abc/acb.abc.LinkActivity} does not exist.

    Read the article

  • Strange behaviour of switch case with boolean value

    - by Nikhil Agrawal
    My question is not about how to solve this error (I already solved it) but about why this error occurs with a boolean value. My function is private string NumberToString(int number, bool flag) { string str; switch(flag) { case true: str = number.ToString("00"); break; case false: str = number.ToString("0000"); break; } return str; } The error is "Use of unassigned local variable 'str'". A bool can only take true or false, so str will be assigned in either case. Then why this error? Moreover, the error goes away if, along with the true and false cases, I add a default case; but what can a bool hold apart from true and false? Why this strange behaviour with a bool variable?
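
    The compiler's definite-assignment analysis never treats a switch without a default as exhaustive, not even over bool, so it assumes there is a path on which no case matches and str is never assigned. A sketch of two ways to satisfy it:

```csharp
// 1) let the compiler see a value on every path by covering "everything else"
private string NumberToString(int number, bool flag)
{
    switch (flag)
    {
        case true:
            return number.ToString("00");
        default:                 // covers false and satisfies the flow analysis
            return number.ToString("0000");
    }
}

// 2) or skip the switch entirely
private string NumberToString2(int number, bool flag)
{
    return flag ? number.ToString("00") : number.ToString("0000");
}
```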

    Read the article

  • Singleton code linker errors in VC 9.0. Runs fine on Linux compiled with GCC

    - by user306560
    I have a simple logger that is implemented as a singleton. It works like i want when I compile and run it with g++ in linux but when I compile in Visual Studio 9.0 with vc++ I get the following errors. Is there a way to fix this? I don't mind changing the logger class around, but I would like to avoid changing how it is called. 1>Linking... 1>loggerTest.obj : error LNK2005: "public: static class Logger * __cdecl Logger::getInstance(void)" (?getInstance@Logger@@SAPAV1@XZ) already defined in Logger.obj 1>loggerTest.obj : error LNK2005: "public: void __thiscall Logger::log(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)" (?log@Logger@@QAEXABV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z) already defined in Logger.obj 1>loggerTest.obj : error LNK2005: "public: void __thiscall Logger::closeLog(void)" (?closeLog@Logger@@QAEXXZ) already defined in Logger.obj 1>loggerTest.obj : error LNK2005: "private: static class Logger * Logger::_instance" (?_instance@Logger@@0PAV1@A) already defined in Logger.obj 1>Logger.obj : error LNK2001: unresolved external symbol "private: static class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > Logger::_path" (?_path@Logger@@0V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@A) 1>loggerTest.obj : error LNK2001: unresolved external symbol "private: static class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > Logger::_path" (?_path@Logger@@0V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@A) 1>Logger.obj : error LNK2001: unresolved external symbol "private: static class boost::mutex Logger::_mutex" (?_mutex@Logger@@0Vmutex@boost@@A) 1>loggerTest.obj : error LNK2001: unresolved external symbol "private: static class boost::mutex Logger::_mutex" (?_mutex@Logger@@0Vmutex@boost@@A) 1>Logger.obj : error LNK2001: unresolved external symbol "private: static class std::basic_ofstream<char,struct std::char_traits<char> > Logger::_log" (?_log@Logger@@0V?$basic_ofstream@DU?$char_traits@D@std@@@std@@A) 1>loggerTest.obj : error LNK2001: unresolved external symbol "private: static class std::basic_ofstream<char,struct std::char_traits<char> > Logger::_log" (?_log@Logger@@0V?$basic_ofstream@DU?$char_traits@D@std@@@std@@A) The code, three files Logger.h Logger.cpp test.cpp #ifndef __LOGGER_CPP__ #define __LOGGER_CPP__ #include "Logger.h" Logger* Logger::_instance = 0; //string Logger::_path = "log"; //ofstream Logger::_log; //boost::mutex Logger::_mutex; Logger* Logger::getInstance(){ { boost::mutex::scoped_lock lock(_mutex); if(_instance == 0) { _instance = new Logger; _path = "log"; } } //mutex return _instance; } void Logger::log(const std::string& msg){ { boost::mutex::scoped_lock lock(_mutex); if(!_log.is_open()){ _log.open(_path.c_str()); } if(_log.is_open()){ _log << msg.c_str() << std::endl; } } } void Logger::closeLog(){ Logger::_log.close(); } #endif ` ... 
Logger.h: #ifndef __LOGGER_H__ #define __LOGGER_H__ #include <iostream> #include <string> #include <fstream> #include <boost/thread/mutex.hpp> #include <boost/thread.hpp> using namespace std; class Logger { public: static Logger* getInstance(); void log(const std::string& msg); void closeLog(); protected: Logger(){} private: static Logger* _instance; static string _path; static bool _logOpen; static ofstream _log; static boost::mutex _mutex; //check mutable }; #endif test.cpp: #include <iostream> #include "Logger.cpp" using namespace std; int main(int argc, char *argv[]) { Logger* log = Logger::getInstance(); log->log("hello world\n"); return 0; }
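
Two separate problems seem to be at work here. The LNK2005 "already defined" errors come from test.cpp doing #include "Logger.cpp", so every function body is compiled into both loggerTest.obj and Logger.obj; include the header instead. The LNK2001 unresolved externals come from the static data members whose out-of-class definitions are commented out; each one needs exactly one definition in a .cpp file. A hedged sketch, assuming the class shown above:

```cpp
// Logger.cpp -- definitions live here, exactly once
#include "Logger.h"

Logger*       Logger::_instance = 0;
std::string   Logger::_path     = "log";
std::ofstream Logger::_log;
boost::mutex  Logger::_mutex;

// ... getInstance(), log(), closeLog() as before ...

// test.cpp -- include the header, never the .cpp
#include "Logger.h"

int main()
{
    Logger* log = Logger::getInstance();
    log->log("hello world");
    return 0;
}
```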

    Read the article

  • C#/MonoDevelop: GTK MessageDialogs require a doubleclick to close - why?

    - by Connel
    I'm a newbie programmer writing a program in MonoDevelop in C# and have a problem with my GTK MessageDialogs. The close button on the window border of my GTK message dialogs requires a double click to actually close them. The close button on the dialog itself works fine. Could someone please tell me how I can fix this? Below is the code: if (fchDestination.CurrentFolder == fchTarget.CurrentFolder) { MessageDialog msdSame = new MessageDialog(this, DialogFlags.Modal, MessageType.Error, ButtonsType.Close, "Destination directory cannot be the same as the target directory"); msdSame.Title="Error"; if ((ResponseType) msdSame.Run() == ResponseType.Close) { msdSame.Destroy(); } return; } if (fchTarget.CurrentFolder.StartsWith(fchDestination.CurrentFolder)) { MessageDialog msdContains = new MessageDialog(this, DialogFlags.Modal, MessageType.Error, ButtonsType.Close, "error"); msdContains.Title="Error"; if ((ResponseType) msdContains.Run() == ResponseType.Close) { msdContains.Destroy(); } return; }
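
    A likely explanation (hedged, based on how GTK# reports responses): closing the dialog from the window border returns ResponseType.DeleteEvent rather than ResponseType.Close, so the if around Destroy() never runs and the dialog has to be dismissed a second time. Destroying the dialog regardless of the response avoids that:

```csharp
MessageDialog msdSame = new MessageDialog(this, DialogFlags.Modal,
    MessageType.Error, ButtonsType.Close,
    "Destination directory cannot be the same as the target directory");
msdSame.Title = "Error";

// Run() blocks until the user responds; destroy the dialog no matter whether
// the response was Close, DeleteEvent (title-bar close) or anything else
msdSame.Run();
msdSame.Destroy();
return;
```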

    Read the article

  • Command does not execute in crontab while command itself works just fine

    - by fuzzybee
    I have this script from Colin Johnson on Github - https://github.com/colinbjohnson/aws-missing-tools/tree/master/ec2-automate-backup It seems great. I have modified it to send email to myself every time an EBS snapshot is created or deleted. The following works like a charm ec2-automate-backup.sh -v "vol-myvolumeid" -k 3 However, it does not execute at all as part of my crontab (I didn't receive any emails) #some command that got commented out */5 * * * * ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3; * * * * * date /root/logs/crontab.log; */5 * * * * date /root/logs/crontab2.log Please note that the 2nd and 3rd execute just fines as I can see the date and time in log files. What could I have missed here? The full ec2-automate-backup.sh is as follows: #!/bin/bash - # Author: Colin Johnson / [email protected] # Date: 2012-09-24 # Version 0.1 # License Type: GNU GENERAL PUBLIC LICENSE, Version 3 # #confirms that executables required for succesful script execution are available prerequisite_check() { for prerequisite in basename ec2-create-snapshot ec2-create-tags ec2-describe-snapshots ec2-delete-snapshot date do #use of "hash" chosen as it is a shell builtin and will add programs to hash table, possibly speeding execution. Use of type also considered - open to suggestions. hash $prerequisite &> /dev/null if [[ $? == 1 ]] #has exits with exit status of 70, executable was not found then echo "In order to use `basename $0`, the executable \"$prerequisite\" must be installed." 1>&2 | mailx -s "Error happened 0" [email protected] ; exit 70 fi done } #get_EBS_List gets a list of available EBS instances depending upon the selection_method of EBS selection that is provided by user input get_EBS_List() { case $selection_method in volumeid) if [[ -z $volumeid ]] then echo "The selection method \"volumeid\" (which is $app_name's default selection_method of operation or requested by using the -s volumeid parameter) requires a volumeid (-v volumeid) for operation. Correct usage is as follows: \"-v vol-6d6a0527\",\"-s volumeid -v vol-6d6a0527\" or \"-v \"vol-6d6a0527 vol-636a0112\"\" if multiple volumes are to be selected." 1>&2 | mailx -s "Error happened 1" [email protected] ; exit 64 fi ebs_selection_string="$volumeid" ;; tag) if [[ -z $tag ]] then echo "The selected selection_method \"tag\" (-s tag) requires a valid tag (-t key=value) for operation. Correct usage is as follows: \"-s tag -t backup=true\" or \"-s tag -t Name=my_tag.\"" 1>&2 | mailx -s "Error happened 2" [email protected] ; exit 64 fi ebs_selection_string="--filter tag:$tag" ;; *) echo "If you specify a selection_method (-s selection_method) for selecting EBS volumes you must select either \"volumeid\" (-s volumeid) or \"tag\" (-s tag)." 1>&2 | mailx -s "Error happened 3" [email protected] ; exit 64 ;; esac #creates a list of all ebs volumes that match the selection string from above ebs_backup_list_complete=`ec2-describe-volumes --show-empty-fields --region $region $ebs_selection_string 2>&1` #takes the output of the previous command ebs_backup_list_result=`echo $?` if [[ $ebs_backup_list_result -gt 0 ]] then echo -e "An error occured when running ec2-describe-volumes. 
The error returned is below:\n$ebs_backup_list_complete" 1>&2 | mailx -s "Error happened 4" [email protected] ; exit 70 fi ebs_backup_list=`echo "$ebs_backup_list_complete" | grep ^VOLUME | cut -f 2` #code to right will output list of EBS volumes to be backed up: echo -e "Now outputting ebs_backup_list:\n$ebs_backup_list" } create_EBS_Snapshot_Tags() { #snapshot tags holds all tags that need to be applied to a given snapshot - by aggregating tags we ensure that ec2-create-tags is called only onece snapshot_tags="" #if $name_tag_create is true then append ec2ab_${ebs_selected}_$date_current to the variable $snapshot_tags if $name_tag_create then ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2` snapshot_tags="$snapshot_tags --tag Name=ec2ab_${ebs_selected}_$date_current" fi #if $purge_after_days is true, then append $purge_after_date to the variable $snapshot_tags if [[ -n $purge_after_days ]] then snapshot_tags="$snapshot_tags --tag PurgeAfter=$purge_after_date --tag PurgeAllow=true" fi #if $snapshot_tags is not zero length then set the tag on the snapshot using ec2-create-tags if [[ -n $snapshot_tags ]] then echo "Tagging Snapshot $ec2_snapshot_resource_id with the following Tags:" ec2-create-tags $ec2_snapshot_resource_id --region $region $snapshot_tags #echo "Snapshot tags successfully created" | mailx -s "Snapshot tags successfully created" [email protected] fi } date_command_get() { #finds full path to date binary date_binary_full_path=`which date` #command below is used to determine if date binary is gnu, macosx or other date_binary_file_result=`file -b $date_binary_full_path` case $date_binary_file_result in "Mach-O 64-bit executable x86_64") date_binary="macosx" ;; "ELF 64-bit LSB executable, x86-64, version 1 (SYSV)"*) date_binary="gnu" ;; *) date_binary="unknown" ;; esac #based on the installed date binary the case statement below will determine the method to use to determine "purge_after_days" in the future case $date_binary in gnu) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;; macosx) date_command="date -v+${purge_after_days}d -u +%Y-%m-%d" ;; unknown) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;; *) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;; esac } purge_EBS_Snapshots() { #snapshot_tag_list is a string that contains all snapshots with either the key PurgeAllow or PurgeAfter set snapshot_tag_list=`ec2-describe-tags --show-empty-fields --region $region --filter resource-type=snapshot --filter key=PurgeAllow,PurgeAfter` #snapshot_purge_allowed is a list of all snapshot_ids with PurgeAllow=true snapshot_purge_allowed=`echo "$snapshot_tag_list" | grep .*PurgeAllow'\t'true | cut -f 3` for snapshot_id_evaluated in $snapshot_purge_allowed do #gets the "PurgeAfter" date which is in UTC with YYYY-MM-DD format (or %Y-%m-%d) purge_after_date=`echo "$snapshot_tag_list" | grep .*$snapshot_id_evaluated'\t'PurgeAfter.* | cut -f 5` #if purge_after_date is not set then we have a problem. Need to alter user. if [[ -z $purge_after_date ]] #Alerts user to the fact that a Snapshot was found with PurgeAllow=true but with no PurgeAfter date. then echo "A Snapshot with the Snapshot ID $snapshot_id_evaluated has the tag \"PurgeAllow=true\" but does not have a \"PurgeAfter=YYYY-MM-DD\" date. $app_name is unable to determine if $snapshot_id_evaluated should be purged." 
1>&2 | mailx -s "Error happened 5" [email protected] else #convert both the date_current and purge_after_date into epoch time to allow for comparison date_current_epoch=`date -j -f "%Y-%m-%d" "$date_current" "+%s"` purge_after_date_epoch=`date -j -f "%Y-%m-%d" "$purge_after_date" "+%s"` #perform compparison - if $purge_after_date_epoch is a lower number than $date_current_epoch than the PurgeAfter date is earlier than the current date - and the snapshot can be safely removed if [[ $purge_after_date_epoch < $date_current_epoch ]] then echo "The snapshot \"$snapshot_id_evaluated\" with the Purge After date of $purge_after_date will be deleted." ec2-delete-snapshot --region $region $snapshot_id_evaluated echo "Old snapshots successfully deleted for $volumeid" | mailx -s "Old snapshots successfully deleted for $volumeid" [email protected] fi fi done } #calls prerequisitecheck function to ensure that all executables required for script execution are available prerequisite_check app_name=`basename $0` #sets defaults selection_method="volumeid" region="ap-southeast-1" #date_binary allows a user to set the "date" binary that is installed on their system and, therefore, the options that will be given to the date binary to perform date calculations date_binary="" #sets the "Name" tag set for a snapshot to false - using "Name" requires that ec2-create-tags be called in addition to ec2-create-snapshot name_tag_create=false #sets the Purge Snapshot feature to false - this feature will eventually allow the removal of snapshots that have a "PurgeAfter" tag that is earlier than current date purge_snapshots=false #handles options processing while getopts :s:r:v:t:k:pn opt do case $opt in s) selection_method="$OPTARG";; r) region="$OPTARG";; v) volumeid="$OPTARG";; t) tag="$OPTARG";; k) purge_after_days="$OPTARG";; n) name_tag_create=true;; p) purge_snapshots=true;; *) echo "Error with Options Input. Cause of failure is most likely that an unsupported parameter was passed or a parameter was passed without a corresponding option." 1>&2 ; exit 64;; esac done #sets date variable date_current=`date -u +%Y-%m-%d` #sets the PurgeAfter tag to the number of days that a snapshot should be retained if [[ -n $purge_after_days ]] then #if the date_binary is not set, call the date_command_get function if [[ -z $date_binary ]] then date_command_get fi purge_after_date=`$date_command` echo "Snapshots taken by $app_name will be eligible for purging after the following date: $purge_after_date." fi #get_EBS_List gets a list of EBS instances for which a snapshot is desired. The list of EBS instances depends upon the selection_method that is provided by user input get_EBS_List #the loop below is called once for each volume in $ebs_backup_list - the currently selected EBS volume is passed in as "ebs_selected" for ebs_selected in $ebs_backup_list do ec2_snapshot_description="ec2ab_${ebs_selected}_$date_current" ec2_create_snapshot_result=`ec2-create-snapshot --region $region -d $ec2_snapshot_description $ebs_selected 2>&1` if [[ $? != 0 ]] then echo -e "An error occured when running ec2-create-snapshot. 
The error returned is below:\n$ec2_create_snapshot_result" 1>&2 ; exit 70 else ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2` echo "Snapshots successfully created for volume $volumeid" | mailx -s "Snapshots successfully created for $volumeid" [email protected] fi create_EBS_Snapshot_Tags done #if purge_snapshots is true, then run purge_EBS_Snapshots function if $purge_snapshots then echo "Snapshot Purging is Starting Now." purge_EBS_Snapshots fi cron log Oct 23 10:24:01 ip-10-130-153-227 CROND[28214]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:24:01 ip-10-130-153-227 CROND[28215]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:25:01 ip-10-130-153-227 CROND[28228]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:25:01 ip-10-130-153-227 CROND[28229]: (root) CMD (date >> /root/logs/crontab2.log) Oct 23 10:26:01 ip-10-130-153-227 CROND[28239]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:27:01 ip-10-130-153-227 CROND[28247]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:27:01 ip-10-130-153-227 CROND[28248]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:28:01 ip-10-130-153-227 CROND[28263]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:29:01 ip-10-130-153-227 CROND[28275]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:30:01 ip-10-130-153-227 CROND[28292]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:30:01 ip-10-130-153-227 CROND[28293]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:30:01 ip-10-130-153-227 CROND[28294]: (root) CMD (date >> /root/logs/crontab2.log) Oct 23 10:31:01 ip-10-130-153-227 CROND[28312]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:32:01 ip-10-130-153-227 CROND[28319]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:33:01 ip-10-130-153-227 CROND[28325]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:33:01 ip-10-130-153-227 CROND[28324]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:34:01 ip-10-130-153-227 CROND[28345]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:35:01 ip-10-130-153-227 CROND[28362]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:35:01 ip-10-130-153-227 CROND[28363]: (root) CMD (date >> /root/logs/crontab2.log) Mails to root From [email protected] Tue Oct 23 06:00:01 2012 Return-Path: <[email protected]> Date: Tue, 23 Oct 2012 06:00:01 GMT From: [email protected] (Cron Daemon) To: [email protected] Subject: Cron <root@ip-10-130-153-227> root ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3 Content-Type: text/plain; charset=UTF-8 Auto-Submitted: auto-generated X-Cron-Env: <SHELL=/bin/sh> X-Cron-Env: <HOME=/root> X-Cron-Env: <PATH=/usr/bin:/bin> X-Cron-Env: <LOGNAME=root> X-Cron-Env: <USER=root> Status: R /bin/sh: root: command not found
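
Two things stand out. First, the mail at the bottom ("/bin/sh: root: command not found") and the log lines "CMD (root (ec2-automate-backup.sh ...))" suggest the entry that actually ran had a user column in it; that column only belongs in /etc/crontab or /etc/cron.d, not in a user crontab, so cron tried to execute a command named "root". Second, cron's environment is minimal (PATH=/usr/bin:/bin, no EC2_HOME or JAVA_HOME), so the script and the ec2-* tools it calls need to be reachable explicitly. A hedged sketch (all paths are assumptions for illustration):

```bash
# user crontab (crontab -e): no user column, full path, environment set up front
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin:/opt/aws/bin
EC2_HOME=/opt/aws/apitools/ec2
JAVA_HOME=/usr/lib/jvm/jre
*/5 * * * * /root/aws-missing-tools/ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3 >> /root/logs/ec2-backup.log 2>&1

# /etc/cron.d/ec2-backup style: here the user column *does* precede the command
# */5 * * * * root /root/aws-missing-tools/ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3
```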

    Read the article

  • Learnings from trying to write better software: Loud errors from the very start

    - by theo.spears
    Microsoft made a very small number of backwards incompatible changes between .NET 1.1 and 2.0, because they wanted to make it as easy and safe as possible to port applications to the new runtime. (Here’s a list.) However, one thing they did change was what happens when a background thread fails with an unhandled exception - in .NET 1.1 nothing happened, the thread terminated, and the application continued oblivious. Try the same trick in .NET 2.0 and the entire application, including all threads, will rudely terminate. There are three reasons for this. Firstly if a background thread has crashed, it may have left the entire application in an inconsistent state, in a way that will affect other threads. It’s better to terminate the entire application than continue and have the application perform actions based on a broken state, for example take customer orders, or write corrupt files to disk.  Secondly, during software development, it is far better for errors to be loud and obtrusive. Even if you have unit tests and integration tests (and you should), a key part of ensuring software works properly is to actually try using it, both through systematic testing and through the casual use all software gets from its developers. Subtle errors are easy to miss if you are not actually doing real work using the application; loud errors are obvious. Thirdly, and most importantly, even if catching and swallowing exceptions indiscriminately doesn't cause any problems in your application, the presence of unexpected exceptions shows you do not fully understand the behavior of your code. The currently released version of your application may be absolutely correct. However, because your mental model of the behavior is wrong, any future change you make to the program could and probably will introduce critical errors.  This applies to more than just exceptions causing threads to exit, any unexpected state should make the application blow up in an un-ignorable way. The worst thing you can do is silently swallow errors and continue. And let's be clear, writing to a log file does not count as blowing up in an un-ignorable way.  This is all simple as long as the call stack only contains your code, but when your functions start to be called by third party or .NET framework code, it's surprisingly easy for exceptions to start vanishing. Let's look at two examples.   1. Windows Forms drag drop events  Usually if you throw an exception from a WinForms event handler it will bring up the "application has crashed" dialog with abort and continue options. This is a good default behavior - the error is big and loud, but it is possible for the user to ignore the error and hopefully save their data, if somehow this bug makes it past testing. However, drag and drop events are different - throw an exception from one of these and it will just be silently swallowed with no explanation.  By the way, it's not just drag and drop events. Timer events do it too.  You can research how exceptions are treated in different handlers and code appropriately, but the safest and most user friendly approach is to always catch exceptions in your event handlers and show your own error message. I'll talk about one good approach to handling these exceptions at the end of this post.   2. SSMS integration for SQL Tab Magic  A while back I wrote an SSMS add-in called SQL Tab Magic (learn more about the process here). It works by listening to certain SSMS events and remembering what documents are opened and closed.
I deployed it internally and it was used for a few months by a number of people without problems, so I was reasonably confident in its quality. Before releasing I made a few cleanups, including introducing error reporting. Bam. A few days later I was looking at over 1,000 error reports in my inbox. It turns out I wasn't handling table designers properly. The exceptions were there, but again SSMS was helpfully swallowing them all for me, so I was blissfully unaware. Had I made my errors loud from the start, I would have noticed these issues long before and fixed them.   Handling exceptions  Now that you are systematically catching exceptions throughout your application, you need to do something with them. I've tried 3 options: log them, alert the user, and automatically send them home.  There are a few good options for logging in .NET. The most widespread is Apache log4net, which provides a very capable and configurable logging framework. There is also NLog which has a compatible interface, with a greater emphasis on fluent rather than XML configuration.  Alerting the user serves two purposes. Firstly it means they understand their action has failed so they don't just assume it worked (silent file copy failure is a problem if you then delete the originals) or that they should keep waiting for a background task to complete. Secondly, it means the users can report the bug to your support team, and then you can fix it. This means the message you show the user should contain the information you need as a developer to identify and fix it. And the user will probably just send you a screenshot of the dialog, so it shouldn't be hidden by scroll bars.  This leads us to the third option, automatically sending error reports home. By automatic I mean with minimal effort on the part of the user, rather than doing it silently behind their backs. The advantage of this is you can send back far more detailed and precise information than you can expect a user to include in an email, and by making it easier to report errors, you make it more likely users will do so.  We do this using a great tool called SmartAssembly (full disclosure: this is a product made by Red Gate). It captures complete stack traces including the values of all local variables and then allows the user to send all this information back with a single click. We also capture log files to help understand what led up to the error. We then use the free SmartAssembly Sync for Jira to dedupe these reports and raise them as bugs in our bug tracking system.  The combined effect of loud errors during development and then automatic error reporting once software is deployed allows us to find and fix more bugs, correct misunderstandings on how our software works, and overall is a key piece in delivering higher quality software. However it is no substitute for having motivated cunning testers in the building - and we're looking to hire more of those too.   If you found this post interesting you should follow me on twitter.
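
The drag-and-drop advice above amounts to wrapping the handler body yourself; a minimal WinForms sketch (the control and handler names are made up):

```csharp
private void listView_DragDrop(object sender, DragEventArgs e)
{
    try
    {
        // ... the real drop handling goes here ...
    }
    catch (Exception ex)
    {
        // WinForms silently swallows exceptions thrown from DragDrop handlers,
        // so surface the failure loudly instead of losing it
        MessageBox.Show("Drop failed: " + ex, "Error");
        // and/or hand it to your logging / error-reporting pipeline here
    }
}
```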

    Read the article

  • How do I concatenate a lot of files into one inside Hadoop, with no mapping or reduction

    - by Leonard
    I'm trying to combine multiple files from multiple input directories into a single file, for various odd reasons I won't go into. My initial try was to write a 'nul' mapper and reducer that just copied input to output, but that failed. My latest try is: vcm_hadoop lester jar /vcm/home/apps/hadoop/contrib/streaming/hadoop-*-streaming.jar -input /cruncher/201004/08/17/00 -output /lcuffcat9 -mapper /bin/cat -reducer NONE but I end up with multiple output files anyway. Does anybody know how I can coax everything into a single output file?
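
    With -reducer NONE the job is map-only, so you get one part-file per map task. Two hedged alternatives (exact option spelling depends on the Hadoop version; note that a single reducer will also re-sort the lines):

```bash
# Option 1: run cat as both mapper and the single reducer -> one output file
hadoop jar /vcm/home/apps/hadoop/contrib/streaming/hadoop-*-streaming.jar \
    -D mapred.reduce.tasks=1 \
    -input /cruncher/201004/08/17/00 \
    -output /lcuffcat9 \
    -mapper /bin/cat \
    -reducer /bin/cat

# Option 2: skip MapReduce entirely and concatenate through the client
hadoop fs -cat /cruncher/201004/08/17/00/part-* | hadoop fs -put - /lcuffcat9/combined
```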

    Read the article

  • Creating Mock object of Interface with type-hint in method fails on PHPUnit

    - by Mark
    I created the following interface: <?php interface Action { public function execute(\requests\Request $request, array $params); } Then I try to make a mock object of this interface with PHPUnit 3.4, but I get the following error: Fatal error: Declaration of Mock_Action_b389c0b1::execute() must be compatible with that of Action::execute() in D:\Xampp\xampp\php\PEAR\PHPUnit\Framework\TestCase.php(1121) : eval()'d code on line 2 I looked through the stack trace I got from PHPUnit and found that it creates a mock object that implements the Action interface, but creates the execute method in the following way: <?php public function execute($request, array $params) As you can see, PHPUnit keeps the array type hint but drops \requests\Request, which obviously leads to an error. Does anyone know a workaround for this error? I also tried it without namespaces, but I still get the same error.
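
    Until the mock generator handles namespaced type hints, one workaround is to hand-write a small stub that implements the interface and records what it was called with; the test then asserts against the recorded values instead of using getMock(). A sketch (the class name is made up):

```php
<?php
// a minimal hand-written test double for the Action interface shown above
class RecordingActionStub implements Action
{
    public $lastRequest;
    public $lastParams;

    public function execute(\requests\Request $request, array $params)
    {
        $this->lastRequest = $request;
        $this->lastParams  = $params;
    }
}

// usage inside a test:
//   $action = new RecordingActionStub();
//   $dispatcher->run($action);                     // hypothetical code under test
//   $this->assertSame($expectedParams, $action->lastParams);
```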

    Read the article

  • Streaming Audio from A URL in Android using MediaPlayer?

    - by Sena Gbeckor-Kove
    Hi, I've been trying to stream MP3s over HTTP using Android's built-in MediaPlayer class. The documentation suggests this should be as easy as: MediaPlayer mp = new MediaPlayer(); mp.setDataSource(URL_OF_FILE); mp.prepare(); mp.start(); However I am getting the following output repeatedly. I have tried different URLs as well. Please don't tell me that streaming doesn't work for MP3s. E/PlayerDriver( 31): Command PLAYER_SET_DATA_SOURCE completed with an error or info PVMFErrNotSupported W/PlayerDriver( 31): PVMFInfoErrorHandlingComplete E/MediaPlayer( 198): error (1, -4) E/MediaPlayer( 198): start called in state 0 E/MediaPlayer( 198): error (-38, 0) E/MediaPlayer( 198): Error (1,-4) E/MediaPlayer( 198): Error (-38,0) Any help much appreciated, thanks S
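
    PVMFErrNotSupported at PLAYER_SET_DATA_SOURCE usually points at the server or URL (redirects, missing headers, an unsupported container) rather than MP3 streaming being impossible. Independently of that, the asynchronous setup below is the commonly recommended shape for network streams (the URL is a placeholder):

```java
MediaPlayer mp = new MediaPlayer();
mp.setAudioStreamType(AudioManager.STREAM_MUSIC);

mp.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
    public void onPrepared(MediaPlayer player) {
        player.start();             // only start once buffering has finished
    }
});
mp.setOnErrorListener(new MediaPlayer.OnErrorListener() {
    public boolean onError(MediaPlayer player, int what, int extra) {
        Log.e("StreamDemo", "MediaPlayer error " + what + ", " + extra);
        return true;
    }
});

try {
    mp.setDataSource("http://example.com/stream.mp3");   // placeholder URL
    mp.prepareAsync();              // buffer in the background instead of blocking
} catch (IOException e) {
    Log.e("StreamDemo", "could not set data source", e);
}
```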

    Read the article

  • Key ATG architecture principles

    - by Glen Borkowski
    Overview The purpose of this article is to describe some of the important foundational concepts of ATG.  This is not intended to cover all areas of the ATG platform, just the most important subset - the ones that allow ATG to be extremely flexible, configurable, high performance, etc.  For more information on these topics, please see the online product manuals. Modules The first concept is called the 'ATG Module'.  Simply put, you can think of modules as the building blocks for ATG applications.  The ATG development team builds the out of the box product using modules (these are the 'out of the box' modules).  Then, when a customer is implementing their site, they build their own modules that sit 'on top' of the out of the box ATG modules.  Modules can be very simple - containing minimal definition, and perhaps a small amount of configuration.  Alternatively, a module can be rather complex - containing custom logic, database schema definitions, configuration, one or more web applications, etc.  Modules generally will have dependencies on other modules (the modules beneath it).  For example, the Commerce Reference Store module (CRS) requires the DCS (out of the box commerce) module. Modules have a ton of value because they provide a way to decouple a customers implementation from the out of the box ATG modules.  This allows for a much easier job when it comes time to upgrade the ATG platform.  Modules are also a very useful way to group functionality into a single package which can be leveraged across multiple ATG applications. One very important thing to understand about modules, or more accurately, ATG as a whole, is that when you start ATG, you tell it what module(s) you want to start.  One of the first things ATG does is to look through all the modules you specified, and for each one, determine a list of modules that are also required to start (based on each modules dependencies).  Once this final, ordered list is determined, ATG continues to boot up.  One of the outputs from the ordered list of modules is that each module can contain it's own classes and configuration.  During boot, the ordered list of modules drives the unified classpath and configpath.  This is what determines which classes override others, and which configuration overrides other configuration.  Think of it as a layered approach. The structure of a module is well defined.  It simply looks like a folder in a filesystem that has certain other folders and files within it.  Here is a list of items that can appear in a module: MyModule: META-INF - this is required, along with a file called MANIFEST.MF which describes certain properties of the module.  One important property is what other modules this module depends on. config - this is typically present in most modules.  It defines a tree structure (folders containing properties files, XML, etc) that maps to ATG components (these are described below). lib - this contains the classes (typically in jarred format) for any code defined in this module j2ee - this is where any web-apps would be stored. src - in case you want to include the source code for this module, it's standard practice to put it here sql - if your module requires any additions to the database schema, you should place that schema here Here's a screenshots of a module: Modules can also contain sub-modules.  A dot-notation is used when referring to these sub-modules (i.e. MyModule.Versioned, where Versioned is a sub-module of MyModule). 
Finally, it is important to completely understand how modules work if you are going to be able to leverage them effectively.  There are many different ways to design modules you want to create, some approaches are better than others, especially if you plan to share functionality between multiple different ATG applications. Components A component in ATG can be thought of as a single item that performs a certain set of related tasks.  An example could be a ProductViews component - used to store information about what products the current customer has viewed.  Components have properties (also called attributes).  The ProductViews component could have properties like lastProductViewed (stores the ID of the last product viewed) or productViewList (stores the ID's of products viewed in order of their being viewed).  The previous examples of component properties would typically also offer get and set methods used to retrieve and store the property values.  Components typically will also offer other types of useful methods aside from get and set.  In the ProductViewed component, we might want to offer a hasViewed method which will tell you if the customer has viewed a certain product or not. Components are organized in a tree like hierarchy called 'nucleus'.  Nucleus is used to locate and instantiate ATG Components.  So, when you create a new ATG component, it will be able to be found 'within' nucleus.  Nucleus allows ATG components to reference one another - this is how components are strung together to perform meaningful work.  It's also a mechanism to prevent redundant configuration - define it once and refer to it from everywhere. Here is a screenshot of a component in nucleus:  Components can be extremely simple (i.e. a single property with a get method), or can be rather complex offering many properties and methods.  To be an ATG component, a few things are required: a class - you can reference an existing out of the box class or you could write your own a properties file - this is used to define your component the above items must be located 'within' nucleus by placing them in the correct spot in your module's config folder Within the properties file, you will need to point to the class you want to use: $class=com.mycompany.myclass You may also want to define the scope of the class (request, session, or global): $scope=session In summary, ATG Components live in nucleus, generally have links to other components, and provide some meaningful type of work.  You can configure components as well as extend their functionality by writing code. Repositories Repositories (a.k.a. Data Anywhere Architecture) is the mechanism that ATG uses to access data primarily stored in relational databases, but also LDAP or other backend systems.  ATG applications are required to be very high performance, and data access is critical in that if not handled properly, it could create a bottleneck.  ATG's repository functionality has been around for a long time - it's proven to be extremely scalable.  Developers new to ATG need to understand how repositories work as this is a critical aspect of the ATG architecture.   Repositories essentially map relational tables to objects in ATG, as well as handle caching.  ATG defines many repositories out of the box (i.e. user profile, catalog, orders, etc), and this is comprised of both the underlying database schema along with the associated repository definition files (XML).  
It is fully expected that implementations will extend / change the out of the box repository definitions, so there is a prescribed approach to doing this.  The first thing to be sure of is to encapsulate your repository definition additions / changes within your own module (as described above).  The other important best practice is to never modify the out of the box schema - in other words, don't add columns to existing ATG tables, just create your own new tables.  These will help ensure you can easily upgrade your application at a later date. xml-combination As mentioned earlier, when you start ATG, the order of the modules will determine the final configpath.  Files within this configpath are 'layered' such that modules on top can override configuration of modules below it.  This is the same concept for repository definition files.  If you want to add a few properties to the out of the box user profile, you simply need to create an XML file containing only your additions, and place it in the correct location in your module.  At boot time, your definition will be combined (hence the term xml-combination) with the lower, out of the box modules, with the result being a user profile that contains everything (out of the box, plus your additions).  Aside from just adding properties, there are also ways to remove and change properties. types of properties Aside from the normal 'database backed' properties, there are a few other interesting types: transient properties - these are properties that are in memory, but not backed by any database column.  These are useful for temporary storage. java-backed properties - by nature, these are transient, but in addition, when you access this property (by called the get method) instead of looking up a piece of data, it performs some logic and returns the results.  'Age' is a good example - if you're storing a birth date on the profile, but your business rules are defined in terms of someones age, you could create a simple java-backed property to look at the birth date and compare it to the current date, and return the persons age. derived properties - this is what allows for inheritance within the repository structure.  You could define a property at the category level, and have the product inherit it's value as well as override it.  This is useful for setting defaults, with the ability to override. caching There are a number of different caching modes which are useful at different times depending on the nature of the data being cached.  For example, the simple cache mode is useful for things like user profiles.  This is because the user profile will typically only be used on a single instance of ATG at one time.  Simple cache mode is also useful for read-only types of data such as the product catalog.  Locked cache mode is useful when you need to ensure that only one ATG instance writes to a particular item at a time - an example would be a customers order.  There are many options in terms of configuring caching which are outside the scope of this article - please refer to the product manuals for more details. Other important concepts - out of scope for this article There are a whole host of concepts that are very important pieces to the ATG platform, but are out of scope for this article.  Here's a brief description of some of them: formhandlers - these are ATG components that handle form submissions by users. pipelines - these are configurable chains of logic that are used for things like handling a request (request pipeline) or checking out an order. 
special kinds of repositories (versioned, files, secure, ...) - there are a couple different types of repositories that are used in various situations.  See the manuals for more information. web development - JSP/ DSP tag library - ATG provides a traditional approach to developing web applications by providing a tag library called the DSP library.  This library is used throughout your JSP pages to interact with all the ATG components. messaging - a message sub-system used as another way for components to interact. personalization - ability for business users to define a personalized user experience for customers.  See the other blog posts related to personalization.
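
As a concrete (made-up) illustration of the component model described above, a session-scoped component is just a properties file placed under a module's config tree; Nucleus instantiates the named class and injects the property values, which may themselves be Nucleus paths to other components:

```properties
# <MyModule>/config/mycompany/ProductViews.properties   (path and class are hypothetical)
$class=com.mycompany.userprofiling.ProductViews
$scope=session

# a property wired to another Nucleus component by its path
profile=/atg/userprofiling/Profile
```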

    Read the article

  • Pear MDB2 class and raiserror exceptions in MSSQL

    - by drholzmichl
    Hi, in MSSQL it's possible to raise an error with raiserror(). I want to use a severity which doesn't interrupt the connection. This error is raised in a stored procedure. In SQL Management Studio everything is fine and I get my error code when executing this SP. But when trying to execute this SP via MDB2 in PHP 5 it doesn't work: all I get is an empty array. The MDB2 object is created via (including the needed options): $db =& MDB2::connect($dsn); $db->setFetchMode(MDB2_FETCHMODE_ASSOC); $db->setOption('portability',MDB2_PORTABILITY_ALL ^ MDB2_PORTABILITY_EMPTY_TO_NULL); The following works (I get a PEAR error): $db->query("RAISERROR('test',11,0);"); But when calling a stored procedure which raises this error via $db->query("EXEC sp_raise_error"); there is no output. What's wrong?

    Read the article

  • DNSSD Crash error tells me to use native Avahi API instead of Bonjour layer. How do I do this?

    - by Poul
    I get this error trying to run the example script here: http://github.com/tenderlove/dnssd/blob/master/sample/register.rb It says I am using the Bonjour compatibility layer and should use the native API instead. How do I switch the dnssd Ruby gem over to using it? *** WARNING *** The program 'ruby1.8' called 'DNSServiceAddRecord()' which is not supported (or only supported partially) in the Apple Bonjour compatibility layer of Avahi. *** WARNING *** Please fix your application to use the native API of Avahi! *** WARNING *** For more information see <http://0pointer.de/avahi-compat?s=libdns_sd&e=ruby1.8&f=DNSServiceAddRecord>

    Read the article

  • Adventures in Lab Management Configuration: Part 2 of 3

    - by Enrique Lima
    The first post was the high level overview. Now it is time for the details on what was done to the existing CMMI Project based on CMMI v 4.2. The first step was to go into Visual Studio, then from the Team Project Collection Settings and then to the Process Template Manager.  Once there, it was a matter of selecting the appropriate template (MSF for CMMI Process Improvement v5.0) and download to a point I could reference later (for example C:\Templates). Then on to using the steps from the guidance post. Since I was using an x64 deployment, I will make reference to the path as <toolpath>, however the actual path to reference in a 64-bit environment is “C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE”. As I mentioned on the previous post, make sure to first perform a backup of the Configuration, Collection and Warehouse DBs.  If you did not apply any changes to the names and such, then you will find those as tfs_Configuration, tfs_DefaultCollection and tfs_Warehouse. Now, the work needed with the witadmin tool: That includes the uploading of the structures that differ from v4.2 to v5.0 There is likely going to be an issue with the naming of some fields. For example, TFS 2010 likes something along the lines of “Area ID”, whereas TFS 2008 would have had it as “AreaID”.  So, this will need to be corrected.  Some posts will have you go through this after the errors pop up.  I would recommend doing this process prior to executing the importwitd process.  witadmin listfields /collection:<path to collection> > c:\ListFields.txt Review the following fields: AreaID, review the Name property and validate if it states “AreaID”, the you will need to rename the Name field to reflect “Area ID”. ExternalLinkCount, RelatedLinkCount, HyperLinkCount, AttachedFileCount and IterationID would be the other fields to check. To correct the issue, then execute the following: witadmin changefield /collection:<path to collection> /n:"System.ExternalLinkCount" /name:"External Link Count" Repeat for Area ID, Related Link Count, Hyperlink Count, Attached File Count and Iteration ID.  Once this is done, proceed with the commands below. witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\TestCase.xml" witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\SharedStep.xml" witadmin importcategories /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml" Modifications to the Bug Definition: First step is to export the existing definition. witadmin exportwitd /collection<path to collection> /p:<project> /n:bug /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml" Make modifications to recently exported MyBug.xml file.  
Details for the modification are here:  http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask Once the changes are done, proceed with the import command witadmin importwitd /collection:<path to collection> /p: <project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml" Repeat the process for the the Scenario or Requirement Type Definition witadmin exportwitd /collection<path to collection> /p:<project> /n:requirement /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml" Make modifications to recently exported MyRequirement.xml file.  Details for the modification are here:  http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask Once the changes are done, proceed with the import command witadmin importwitd /collection:<path to collection> /p: <project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml" Provide the Bug Field Mapping definition, after creating the file as specified here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#TCMBugFieldMapping tcm bugfieldmapping /import /mappingfile:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\bugfieldmappings.xml" /collection:<path to collection> /teamproject:<project name>

    Read the article

  • How do I solve an unresolved external when using C++ Builder packages (with TForm based classes)?

    - by José Renato
    Hi, I'm working with C++ Builder 6 and 2010, and I'm having this problem: http://stackoverflow.com/questions/2727001/how-do-i-solve-an-unresolved-external-when-using-c-builder-packages But the difference here is that I'm using a form compiled inside the package. So take the example from that question, but in addition I'm including a form class, like TForm2: class TForm2 : public TForm { __published: // IDE-managed Components TButton *Button1; void __fastcall Button1Click(TObject *Sender); private: // User declarations public: // User declarations __fastcall TForm2(TComponent* Owner); }; //--------------------------------------------------------------------------- extern PACKAGE TForm2 *Form2; //--------------------------------------------------------------------------- So when I try to use this class in any project, the linker stops and gives me the unresolved external error. When I got that error I tried to add the word PACKAGE, like this: class PACKAGE TForm2 But when I try to compile the PACKAGE, the compiler stops with this unresolved external error: [ILINK32 Error] Error: Unresolved external '__fastcall Forms::TCustomForm::~TCustomForm()' referenced from c:\projects\UNIT2.OBJ How can I solve this problem? PS: Sorry about the bad English.

    Read the article

  • Basis of definitions

    - by Yttrill
    Let us suppose we have a set of functions which characterise something: in the OO world, methods characterising a type. In mathematics these are propositions and we have two kinds: axioms and lemmas. Axioms are assumptions, lemmas are easily derived from them. In C++ axioms are pure virtual functions. Here's the problem: there's more than one way to axiomatise a system. Given a set of propositions or methods, a subset of the propositions which is necessary and sufficient to derive all the others is called a basis. So too for methods or functions: we have a desired set which must be defined, typically each one has one or more definitions in terms of the others, and we require the programmer to provide instance definitions which are sufficient to allow all the others to be defined and which, if there is an overspecification, are consistent. Let me give an example (in Felix; Haskell code would be similar): class Eq[t] { virtual fun ==(x:t,y:t):bool => eq(x,y); virtual fun eq(x:t, y:t)=> x == y; virtual fun != (x:t,y:t):bool => not (x == y); axiom reflex(x:t): x == x; axiom sym(x:t, y:t): (x == y) == (y == x); axiom trans(x:t, y:t, z:t): implies(x == y and y == z, x == z); } Here it is clear: the programmer must define either == or eq or both. If both are defined, the definitions must be equivalent. Failing to define one doesn't cause a compiler error, it causes an infinite loop at run time. Defining both inequivalently doesn't cause an error either, it is just inconsistent. Note that the axioms specified constrain the semantics of any definition. Given a definition of ==, either directly or via a definition of eq, != is defined automatically, although the programmer might replace the default with something more efficient; clearly such an overspecification has to be consistent. Please note, == could also be defined in terms of !=, but we didn't do that. A characterisation of a partial or total order is more complex. It is much more demanding since there is a combinatorial explosion of possible bases. There is a reason to desire overspecification: performance. There is also another reason: choice and convenience. So here there are several questions: one is how to check the semantics are obeyed, and I am not looking for an answer to that here (way too hard!). The other question is: how can we specify, and check, that an instance provides at least a basis? And a much harder question: how can we provide several default definitions which depend on the basis chosen?
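
    For comparison, Haskell's Eq class expresses the same idea: each method has a default definition in terms of the other, so an instance may supply either one (its basis), and an instance that supplies neither type-checks but loops forever at run time, just as described above. A sketch (named MyEq to avoid clashing with the Prelude):

```haskell
class MyEq a where
  eq  :: a -> a -> Bool
  neq :: a -> a -> Bool

  -- circular defaults: an instance must break the cycle by defining
  -- at least one of the two (a basis); defining neither loops forever
  eq  x y = not (neq x y)
  neq x y = not (eq x y)

-- this instance chooses eq as its basis; neq comes for free
instance MyEq Bool where
  eq True  True  = True
  eq False False = True
  eq _     _     = False
```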

    Read the article

  • CodeIgniter model debugging errors

    - by Jono
    I am new to CodeIgniter, and I need a way to get more meaningful error messages. Specifically I am having trouble with some model relationships, but the error is vague. I am willing to try/install anything, since I don't know how to fix this relationship. Is there a way to specify how verbose an error message is? Also, this could be related to DataMapper, but I can't tell. I don't care if they are logged or shown in the browser. In my browser the error reads: An Error Was Encountered - Unable to relate X with Y. Any more info would be great: which class, line number, a stack trace. Increasing the log threshold did not help.

    Read the article
