Search Results

Search found 23347 results on 934 pages for 'key storage'.

Page 269 of 934

  • Easiest way to remove Keys from a 2D Array?

    - by dbemerlin
    Hi, I have an array that looks like this: array( 0 => array( 'key1' => 'a', 'key2' => 'b', 'key3' => 'c' ), 1 => array( 'key1' => 'c', 'key2' => 'b', 'key3' => 'a' ), ... ) I need a function to get an array containing just a (variable) number of keys, i.e. reduce_array(array('key1', 'key3')); should return: array( 0 => array( 'key1' => 'a', 'key3' => 'c' ), 1 => array( 'key1' => 'c', 'key3' => 'a' ), ... ) What is the easiest way to do this? If possible, without any additional helper functions like array_filter or array_map, as my coworkers already complain about me using too many functions. The source array will always have the given keys, so it's not required to check for existence. Bonus points if the values are unique (the keys will always be related to each other, meaning that if key1 has value a then the other key(s) will always have value b). My current solution works but is quite clumsy (even the name is horrible, but I can't find a better one): function get_unique_values_from_array_by_keys(array $array, array $keys) { $result = array(); $found = array(); if (count($keys) > 0) { foreach ($array as $item) { if (in_array($item[$keys[0]], $found)) continue; array_push($found, $item[$keys[0]]); $result_item = array(); foreach ($keys as $key) { $result_item[$key] = $item[$key]; } array_push($result, $result_item); } } return $result; } Addition: the PHP version is 5.1.6.
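
    A minimal, untested sketch of one way the reduction could be written for PHP >= 5.1.0 (it does lean on the built-in array_intersect_key, though not on any callback-style helpers; the uniqueness bonus is handled by keying the result on the first requested column):

      function reduce_array(array $array, array $keys) {
          $wanted = array_flip($keys);   // e.g. array('key1' => 0, 'key3' => 1)
          $result = array();
          foreach ($array as $item) {
              // Keep only the requested keys, and let the first key's value act as
              // the result index so duplicate rows collapse into a single entry.
              $result[$item[$keys[0]]] = array_intersect_key($item, $wanted);
          }
          return array_values($result);  // re-index numerically
      }

    Note the signature takes the source array as well, so the call would look like reduce_array($data, array('key1', 'key3')).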


  • Improve Efficiency in Array comparison in Ruby

    - by user2985025
    Hi, I am working in Ruby/Cucumber and have a requirement to develop a comparison module/program to compare two files. The requirements: this is a migration project, where data from one application is moved to another, and we need to compare the data from the existing application against the new one. Solution: I have developed a comparison engine in Ruby for the above requirement. a) Get the data, de-duplicated and sorted, from both DBs. b) Put the data in a text file with "||" as the delimiter. c) Use the key columns (numbers) that identify a unique record in the DB to compare the two files. For example, file1 has 1,2,3,4,5,6 and file2 has 1,2,3,4,5,7, and columns 1,2,3,4,5 are key columns. I use these key columns and compare 6 and 7, which results in a fail. Issue: the major problem we are facing is that if the mismatches are more than 70% for 100,000 records or more, the comparison time is large. If the mismatches are less than 40%, the comparison time is OK. Diff and Diff-LCS will not work in this case because we need key columns to arrive at an accurate data comparison between the two applications. Is there any other method to efficiently reduce the time when the mismatches are more than 70% for 100,000 records or more? Thanks


  • PhpMyAdmin Hangs On MySQL Error

    - by user75228
    I'm currently running PhpMyAdmin 4.0.10 (the latest version supporting PHP 4.2.X) on my Amazon EC2 connecting to a MySQL database on RDS. Everything works perfectly fine except actions that return a mysql error message. Whether I perform "any" kind of action that will return a mysql error, Phpmyadmin will hang with the yellow "Loading" box forever without displaying anything. For example, if I perform the following command in MySQL CLI : select * from 123; It instantly returns the following error : ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '123' at line 1 which is completely normal because table 123 doesn't exist. However, if I execute the exact same command in the "SQL" box in Phpmyadmin, after I click "Go" it'll display "Loading" and stops there forever. Has anyone ever encountered this kind of issue with Phpmyadmin? Is this a bug or I have something wrong with my config.inc.php? Any help would be much appreciated. I also noticed these error messages in my apache error logs : /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open Below are my config.inc.php settings : <?php /* vim: set expandtab sw=4 ts=4 sts=4: */ /** * phpMyAdmin sample configuration, you can use it as base for * manual configuration. For easier setup you can use setup/ * * All directives are explained in documentation in the doc/ folder * or at <http://docs.phpmyadmin.net/>. * * @package PhpMyAdmin */ /* * This is needed for cookie based authentication to encrypt password in * cookie */ $cfg['blowfish_secret'] = 'something_random'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */ /* * Servers configuration */ $i = 0; /* * First server */ $i++; /* Authentication type */ $cfg['Servers'][$i]['auth_type'] = 'cookie'; /* Server parameters */ $cfg['Servers'][$i]['host'] = '*.rds.amazonaws.com'; $cfg['Servers'][$i]['connect_type'] = 'tcp'; $cfg['Servers'][$i]['compress'] = true; /* Select mysql if your server does not have mysqli */ $cfg['Servers'][$i]['extension'] = 'mysqli'; $cfg['Servers'][$i]['AllowNoPassword'] = false; $cfg['LoginCookieValidity'] = '3600'; /* * phpMyAdmin configuration storage settings. 
*/ /* User used to manipulate with storage */ $cfg['Servers'][$i]['controlhost'] = '*.rds.amazonaws.com'; $cfg['Servers'][$i]['controluser'] = 'pma'; $cfg['Servers'][$i]['controlpass'] = 'password'; /* Storage database and tables */ $cfg['Servers'][$i]['pmadb'] = 'phpmyadmin'; $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark'; $cfg['Servers'][$i]['relation'] = 'pma__relation'; $cfg['Servers'][$i]['table_info'] = 'pma__table_info'; $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords'; $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages'; $cfg['Servers'][$i]['column_info'] = 'pma__column_info'; $cfg['Servers'][$i]['history'] = 'pma__history'; $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs'; $cfg['Servers'][$i]['tracking'] = 'pma__tracking'; $cfg['Servers'][$i]['designer_coords'] = 'pma__designer_coords'; $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig'; $cfg['Servers'][$i]['recent'] = 'pma__recent'; /* Contrib / Swekey authentication */ // $cfg['Servers'][$i]['auth_swekey_config'] = '/etc/swekey-pma.conf'; /* * End of servers configuration */ /* * Directories for saving/loading files from server */ $cfg['UploadDir'] = ''; $cfg['SaveDir'] = ''; /** * Defines whether a user should be displayed a "show all (records)" * button in browse mode or not. * default = false */ //$cfg['ShowAll'] = true; /** * Number of rows displayed when browsing a result set. If the result * set contains more rows, "Previous" and "Next". * default = 30 */ $cfg['MaxRows'] = 50; /** * disallow editing of binary fields * valid values are: * false allow editing * 'blob' allow editing except for BLOB fields * 'noblob' disallow editing except for BLOB fields * 'all' disallow editing * default = blob */ //$cfg['ProtectBinary'] = 'false'; /** * Default language to use, if not browser-defined or user-defined * (you find all languages in the locale folder) * uncomment the desired line: * default = 'en' */ //$cfg['DefaultLang'] = 'en'; //$cfg['DefaultLang'] = 'de'; /** * default display direction (horizontal|vertical|horizontalflipped) */ //$cfg['DefaultDisplay'] = 'vertical'; /** * How many columns should be used for table display of a database? * (a value larger than 1 results in some information being hidden) * default = 1 */ //$cfg['PropertiesNumColumns'] = 2; /** * Set to true if you want DB-based query history.If false, this utilizes * JS-routines to display query history (lost by window close) * * This requires configuration storage enabled, see above. * default = false */ //$cfg['QueryHistoryDB'] = true; /** * When using DB-based query history, how many entries should be kept? * * default = 25 */ //$cfg['QueryHistoryMax'] = 100; /* * You can find more configuration options in the documentation * in the doc/ folder or at <http://docs.phpmyadmin.net/>. */ ?>


  • innodb memory usage mysql

    - by Tiddo
    I have a small VPS with only 256 MB of RAM, with a maximum burst up to 512 MB. When I configure my VPS without InnoDB, it only uses 130 MB of RAM, so that is no problem for me. But when I turn on InnoDB, the memory usage grows to about 300-400 MB. Is it possible to run InnoDB such that I won't exceed the 256 MB? Preferably I don't want to use more than 100 MB for InnoDB. I already came across some sites which said I could limit the memory usage, but if I limit it to only 100 MB, will the DB run well enough (compared to, for example, the MyISAM storage engine)? If 100 MB is too little memory for InnoDB, can you recommend any other storage engine which supports transactions?


  • Get settings through a button action

    - by Russ Knudsen
    I am looking for a way to access user settings (I assume NSUserDefaults?) through a button action. Let me back up and explain. What I have right now are 2 TextFields, a label, and a button. The user will type measurements into the 2 TextFields. When they hit the button, the label displays the volume of the measured object in gallons. That part of it works great. Then I wanted to give the user the option to output the volume in liters instead of gallons. I would also like to give the user the option to type in the measurements in centimeters. So I set up a 'Settings.bundle' and configured it with 2 'Multi Value' cells (Measurement Units and Volumetric Units). Each Multi Value cell has its own list of different units the user can pick from. My main issue is that I don't know how to access these settings through the button action. I may be thinking of this wrong, but what I'm looking for is something like: Button Action: If settings key = 0 Then do the math in Inches, Display in Gallons If settings key = 1 Then do the math in Centimeters, Display in Gallons If settings key = 2 Then do the math in Inches, Display in Liters If settings key = 3 Then do the math in Centimeters, Display in Liters Etc... Is this possible? Am I thinking of this in the wrong way? What's the best way to do this?


  • SQL server environment

    - by Olegas D
    Hello, I'm considering some changes to our current sales environment and trying to check all the pros and cons. Current situation: SQL Server (quite a decent HP server - server1) + backup server (smaller Dell server - server2). All SQL files and SQL Server itself are on server1. If something goes wrong with server1 I will have to manually move to server2. Connecting to the SQL server: 1 HQ (where the server is located) + 4 sites through VPN. Now I'm considering 2 scenarios: Buy some storage system + update the existing servers (add RAM, upgrade processors) and go for VMware ESXi. Rent a server at a datacenter + rent a virtual server in case the real server goes down, and also rent some space at a data storage provider to keep the SQL files there. Has anyone considered these things and maybe found a good pros/cons list? ;) Thanks


  • Loading child entities with JPA on Google App Engine

    - by Phil H
    I am not able to get child entities to load once they are persisted on Google App Engine. I am certain that they are saving because I can see them in the datastore. For example, if I have the following two entities: public class Parent implements Serializable{ @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Extension(vendorName="datanucleus", key="gae.encoded-pk", value="true") private String key; @OneToMany(cascade=CascadeType.ALL) private List<Child> children = new ArrayList<Child>(); //getters and setters } public class Child implements Serializable{ @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Extension(vendorName="datanucleus", key="gae.encoded-pk", value="true") private String key; private String name; @ManyToOne private Parent parent; //getters and setters } I can save the parent and a child just fine using the following: Parent parent = new Parent(); Child child = new Child(); child.setName("Child Object"); parent.getChildren().add(child); em.persist(parent); However, when I try to load the parent and then try to access the children (I know GAE lazy loads), I do not get the child records. //parent already successfully loaded parent.getChildren().size(); // this returns 0 I've looked at tutorial after tutorial and nothing has worked so far. I'm using version 1.3.3.1 of the SDK. I've seen the problem mentioned on various blogs and even the App Engine forums, but the answer is always JDO-related. Am I doing something wrong, or has anyone else had this problem and solved it for JPA?


  • Building a system that allows users to see a video only once

    - by Bart van Heukelom
    My client wants to distribute a video to some people, specifically car dealers, but he doesn't want the video to end up on YouTube or something like that. Therefore he wants the recipients of the video to be able to see it only once. My idea to implement this is: generate a unique key per viewer; send each viewer a link to a page with a Flash-based video player, with their key in the URL; have Flash get the video from the server. On the server the key is checked and the file sent (using PHP's readfile or something equivalent); then the key is invalidated. I was thinking this wouldn't take more than a day to build. I know that if you want somebody to be able to play something, you implicitly give them the power to record it as well, but the client just wants me to make it as hard as possible. Do you think this is secure enough for the intended audience? Anything else I can do to add some security without exceeding the development time of 1 day? I'm also interested in ready-made solutions, if they exist.
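
    A rough sketch of the server-side gate described above, using PDO; the video_keys table, its columns, the URL parameter and the file path are placeholder names, not an existing schema:

      <?php
      // One-time link handler: look up the key, refuse it if missing or already used,
      // otherwise mark it used and stream the file from outside the web root.
      $pdo = new PDO('mysql:host=localhost;dbname=videos', 'user', 'pass');
      $key = isset($_GET['k']) ? $_GET['k'] : '';

      $stmt = $pdo->prepare('SELECT used FROM video_keys WHERE key_value = ?');
      $stmt->execute(array($key));
      $row = $stmt->fetch(PDO::FETCH_ASSOC);

      if (!$row || $row['used']) {
          header('HTTP/1.1 403 Forbidden');
          exit('This link has expired.');
      }

      // Invalidate before streaming so a second, parallel request is refused.
      $pdo->prepare('UPDATE video_keys SET used = 1 WHERE key_value = ?')
          ->execute(array($key));

      header('Content-Type: video/x-flv');
      readfile('/path/outside/webroot/video.flv');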


  • Conditional insert as a single database transaction in HSQLDB 1.8?

    - by Kevin Pauli
    I'm using a particular database table like a "Set" data structure, i.e., you can attempt to insert the same row several times, but it will only contain one instance. The primary key is a natural key. For example, I want the following series of operations to work fine, and result in only one row for Oklahoma: insert into states_visited (state_name) values ('Oklahoma'); insert into states_visited (state_name) values ('Texas'); insert into states_visited (state_name) values ('Oklahoma'); I am of course getting an error due to the duplicate primary key on subsequent inserts of the same value. Is there a way to make the insert conditional, so that these errors are not thrown? I.e., only do the insert if the natural key does not already exist? I know I could do a where clause and a subquery to test for the row's existence first, but it seems that would be expensive. That's 2 physical operations for one logical "conditional insert" operation. Anything like this in SQL? FYI I am using HSQLDB 1.8


  • getting an exception when refreshing the configuration in memory on change to external config file

    - by RKP
    Hi, I have a Windows service which reads its config settings from an external file located at a different path than the path to the service executable. The Windows service uses a FileSystemWatcher to monitor changes to the external config file, and when the config file is changed it should refresh the settings in memory by reading the updated settings from the config file. But this is where I am getting an exception, "ConfigurationErrorsException", and the message is "An error occurred creating the configuration section handler for appSettings: The process cannot access the file 'M:\somefolder\WindowsService1.Config' because it is being used by another process." The inner exception is actually an "IOException" with the same message. Here is the code; I am not sure what is wrong with it. Please help. protected void watcher_Changed(object sender, FileSystemEventArgs e) { ConfigurationManager.RefreshSection(ConfigSectionName); WriteToEventLog(ConfigKeyCheck); if (FileChanged != null) FileChanged(this, EventArgs.Empty); } private void WriteToEventLog(string key) { if (EventLog.SourceExists(ServiceEventSource)) { EventLog.WriteEntry(ServiceEventSource, string.Format("key:{0}, value:{1}", key, ConfigurationManager.AppSettings[key])); } }


  • Git repos over multiple machines - backups and keeping in sync

    - by a-or-b
    I'm new to git so please feel free to RTFM me... I have multiple development sites (none of which can communicate via a network with each other) and am working on a few projects (with a few people) at any one time. What I would ideally have is, at each site, a centralized repository that can be pulled from, but development would occur in our own (personal) repos. Then I would like to be able to sync across the centralized repos (via USB key, for example). I want a centralized repo at each location as (1) I'm new to git and do break my (personal) local repo by playing around, and (2) some projects get put on hold, so I want to be able to free up disk space by deleting them. This is the "backup" part of my question. I was also hoping to be able to use 'git clone --bare' for my centralized repos (and the USB key repos too?) as we don't need the full checkout, just the git benefits. However I can't seem to get a bare repo to work as a repo I can push from. I've used 'git remote' to set up a remote origin (similar to http://toolmantim.com/thoughts/setting_up_a_new_remote_git_repository) but I can't get 'git push' to work - it seems I need a checked-out repo. Does anyone else use this sort of repo/development structure, or is there something fundamental about git usage that I'm missing? A solution that I thought about that might not work: if I had a 'git clone --bare' at each site and then used a git repo on my removable media with remotes set up for each site, then I could ('pull') sync my USB key with each repo. But then can I update the site repo from my USB key? Could I push from USB?


  • Partition falsely recognized as RAW

    - by Paul Hiemstra
    On my 2 TB data disk I have two primary partitions, one of 1.6 TB for data storage in Linux (ext3) and one of 300 GB for some additional data storage for Windows. I run a dual-boot Windows 7/Ubuntu 12.04 install. The issue I have is that if I start my computer into Windows 7, both of the partitions on my 2TB data drive go unrecognized. Instead, Windows 7 sees one 1TB partition with type RAW. However, if I reboot to Linux, and then back to Windows 7, the partitions are correctly recognized. The following two screenshots illustrate my situation (before I reboot to Linux, and after the reboot). I have two questions: What could cause this behavior? How can I solve this issue?


  • Release edit in QTableWidget Cell

    - by Schomin
    Basically I am trying to give the Enter key the same functionality as the Return key has when editing a cell in a QTableWidget. If a cell is being edited and Enter is pressed, I want it to jump out of editing that cell just like Return does. It feels like I've literally tried everything. I've even tried passing a Return press event to QCoreApplication. It seems like if you're editing a cell and you press a key to trigger an action, it won't happen. That seems to be what the problem is, and I'm not sure how to get around it. I've been setting up all of my keyboard shortcuts for this program as actions because it seems easier to set up. Is there another way to do this that would allow the key event to happen when editing a cell? Can anyone help out with this? Thank you in advance. I tried this, but it didn't work for me: http://stackoverflow.com/questions/518447/how-can-i-tell-a-qtablewidget-to-end-editing-a-cell


  • Python: why does this code take forever (infinite loop?)

    - by Rosarch
    I'm developing an app in Google App Engine. One of my methods is never completing, which makes me think it's caught in an infinite loop. I've stared at it, but can't figure it out. Disclaimer: I'm using GAEUnit (http://code.google.com/p/gaeunit) to run my tests. Perhaps it's acting oddly? This is the problematic function: def _traverseForwards(course, c_levels): ''' Looks forwards in the dependency graph ''' result = {'nodes': [], 'arcs': []} if c_levels == 0: return result model_arc_tails_with_course = set(_getListArcTailsWithCourse(course)) q_arc_heads = DependencyArcHead.all() for model_arc_head in q_arc_heads: for model_arc_tail in model_arc_tails_with_course: if model_arc_tail.key() in model_arc_head.tails: result['nodes'].append(model_arc_head.sink) result['arcs'].append(_makeArc(course, model_arc_head.sink)) # rec_result = _traverseForwards(model_arc_head.sink, c_levels - 1) # _extendResult(result, rec_result) return result Originally, I thought it might be a recursion error, but I commented out the recursion and the problem persists. If this function is called with c_levels = 0, it runs fine. The models it references: class Course(db.Model): dept_code = db.StringProperty() number = db.IntegerProperty() title = db.StringProperty() raw_pre_reqs = db.StringProperty(multiline=True) original_description = db.StringProperty() def getPreReqs(self): return pickle.loads(str(self.raw_pre_reqs)) def __repr__(self): return "%s %s: %s" % (self.dept_code, self.number, self.title) class DependencyArcTail(db.Model): ''' A list of courses that is a pre-req for something else ''' courses = db.ListProperty(db.Key) def equals(self, arcTail): for this_course in self.courses: if not (this_course in arcTail.courses): return False for other_course in arcTail.courses: if not (other_course in self.courses): return False return True class DependencyArcHead(db.Model): ''' Maintains a course, and a list of tails with that course as their sink ''' sink = db.ReferenceProperty() tails = db.ListProperty(db.Key) Utility functions it references: def _makeArc(source, sink): return {'source': source, 'sink': sink} def _getListArcTailsWithCourse(course): ''' returns a LIST, not SET there may be duplicate entries ''' q_arc_heads = DependencyArcHead.all() result = [] for arc_head in q_arc_heads: for key_arc_tail in arc_head.tails: model_arc_tail = db.get(key_arc_tail) if course.key() in model_arc_tail.courses: result.append(model_arc_tail) return result Am I missing something pretty obvious here, or is GAEUnit acting up?


  • read contents of a xml file into a data grid view

    - by syedsaleemss
    I'm using C# .NET, Windows Forms application. I have an XML file which contains two columns and some rows of data. Now I have to fill this data into a DataGridView. I'm using a button; when I click on the button an open dialog box will appear. I have to select the XML file name, and when I click Open the contents of that XML file should come into the DataGridView. I have tried the following code: { XmlDataDocument xmlDataDoc = new XmlDataDocument(); xmlDataDoc.DataSet.ReadXml(filename); ds = xmlDataDoc.DataSet; datagridview1.DataSource = ds.DefaultViewManager; datagridview1.DataMember = "language"; } My XML file is: <languages> <language> <key> key1</key> <value>value1</value> </language> <language> <key> key2</key> <value>value2</value> </language> </languages> It's working fine, but only for "language". I need it to work with other XML files also.


  • ZFS Configuration advice

    - by rbarrette
    I need some advice on configuring ZFS. Here is what I have: physical disks: 4x 3 TB, 2x 2 TB, 2x 1 TB. What is the best configuration for my vdevs and storage pool? I want to maximize space but still maintain redundancy. Should I just get 2 more 3 TB drives and create 2x 3-3TB raidz2 storage pools? Create a 1x 4-3TB raidz2 vdev? Can I put redundancy at the pool level, create individual vdevs for each drive, and then add 2x 1TB+2TB striped vdevs to keep all vdevs the same size? Keep in mind I do need to migrate data from the smaller drives and am planning on adding more 3 TB drives later on. What do you think?


  • Loop to check all 14 days in the pay period

    - by Rachel Ann Arndt
    Name: Calc_Anniversary Input: Pay_Date, Hire_Date, Termination_Date Output: "Y" if is the anniversary of the employee's Hire_Date, "N" if it is not, and "T" if he has been terminated before his anniversary. Description: Create local variables to hold the month and day of the employee's Date_of_Hire, Termination_Date, and of the processing date using the TO_CHAR function. First check to see if he was terminated before his anniversary. The anniversary could be on any day during the pay period, so there will be a loop to check all 14 days in the pay period to see if one was his anniversary. CREATE OR replace FUNCTION Calc_anniversary( incoming_anniversary_date IN VARCHAR2) RETURN BOOLEAN IS hiredate VARCHAR2(20); terminationdate VARCHAR(20); employeeid VARCHAR2(38); paydate NUMBER := 0; BEGIN SELECT Count(arndt_raw_time_sheet_data.pay_date) INTO paydate FROM arndt_raw_time_sheet_data WHERE paydate = incoming_anniversary_date; WHILE paydate <= 14 LOOP SELECT To_char(employee_id, '999'), To_char(hire_date, 'DD-MON'), To_char(termination_date, 'DD-MON') INTO employeeid, hiredate, terminationdate FROM employees, time_sheet WHERE employees.employee_id = time_sheet.employee_id AND paydate = pay_date; IF terminationdate > hiredate THEN RETURN 'T'; ELSE IF To_char(SYSDATE, 'DD-MON') = To_char(hiredate, 'DD-MON')THEN RETURN 'Y'; ELSE RETURN 'N'; END IF; END IF; paydate := paydate + 1; END LOOP; END; Tables I am using CREATE TABLE Employees ( EMPLOYEE_ID INTEGER, FIRST_NAME VARCHAR2(15), LAST_NAME VARCHAR2(25), ADDRESS_LINE_ONE VARCHAR2(35), ADDRESS_LINE_TWO VARCHAR2(35), CITY VARCHAR2(28), STATE CHAR(2), ZIP_CODE CHAR(10), COUNTY VARCHAR2(10), EMAIL VARCHAR2(16), PHONE_NUMBER VARCHAR2(12), SOCIAL_SECURITY_NUMBER VARCHAR2(11), HIRE_DATE DATE, TERMINATION_DATE DATE, DATE_OF_BIRTH DATE, SPOUSE_ID INTEGER, MARITAL_STATUS CHAR(1), ALLOWANCES INTEGER, PERSONAL_TIME_OFF FLOAT, CONSTRAINT pk_employee_id PRIMARY KEY (EMPLOYEE_ID), CONSTRAINT fk_spouse_id FOREIGN KEY (SPOUSE_ID) REFERENCES EMPLOYEES (EMPLOYEE_ID)) / CREATE TABLE Arndt_Raw_Time_Sheet_data ( EMPLOYEE_ID INTEGER, PAY_DATE DATE, HOURS_WORKED FLOAT, SALES_AMOUNT FLOAT, CONSTRAINT pk_employee_id_pay_date_time PRIMARY KEY (EMPLOYEE_ID, PAY_DATE), CONSTRAINT fk_employee_id_time FOREIGN KEY (EMPLOYEE_ID) REFERENCES EMPLOYEES (EMployee_ID)); error FUNCTION Calc_Anniversary compiled Warning: execution completed with warning


  • Where should global Application Settings be stored on Windows 7?

    - by Kerido
    Hi everybody, I'm working hard on making my product work seamlessly on Windows 7. The problem is that there is a small set of global (not user-specific) application settings that all users should be able to change. On previous versions I used HKLM\Software\__Company__\__Product__ for that purpose. This allowed Power Users and Administrators to modify the Registry Key and everything worked correctly. Now that Windows Vista and Windows 7 have this UAC feature, by default even an Administrator cannot access the Key for writing without elevation. A stupid solution would, of course, mean adding the requireAdministrator option to the application manifest. But this is really unprofessional since the product itself is extremely far from administration-related tasks. So I need to stay with asInvoker. Another solution could mean programmatic elevation during the moments when write access to the Registry Key is required. Leaving aside the fact that I don't know how to implement that, it's pretty awkward as well. It interferes with normal user experience so much that I would hardly consider it an option. What I know should be relatively easy to accomplish is adding write access to the specified Registry Key during installation. I created a separate question for that. This is also very similar to accessing a shared file for storing the settings. My feeling is that there must be a way to accomplish what I need, in a way that is secure, straightforward and compatible with all OSes. Any ideas?


  • 5 x 3GB drives and 4 x 1500GB drive best raid setup?

    - by Zen_Silence
    Hello, I am building a file server. My plan is to have the operating system on one RAID partition and the data storage on another partition. I currently have 5 x 3GB IDE drives that I would like to put the operating system on. These drives are old, but that doesn't matter to me at the moment; I have a ton of them, so for this RAID partition I would probably want to be able to pull out a dead drive and rebuild the array. My file partition is going to consist of 4 x 1.5TB SATA drives; there I would like the maximum storage with some redundancy. Any suggestions as to which RAID level I should use would be greatly appreciated, and it would also help if you could suggest a PCI or PCI-e RAID controller to handle these arrays. Thanks in advance, Zen_Silence


  • NHibernate many-to-many mapping

    - by Scozzard
    Hi, I am having an issue with many-to-many mapping using NHibernate. Basically I have 2 classes in my object model (Scenario and Skill) mapping to three tables in my database (Scenario, Skill and ScenarioSkill). The ScenarioSkill table just holds the IDs of the Skill and Scenario tables (SkillID, ScenarioID). In the object model a Scenario has a couple of general properties and a list of associated skills (IList) that is obtained from the ScenarioSkill table. There is no associated IList of Scenarios for the Skill object. The mapping from Scenario and Skill to ScenarioSkill is a many-to-many relationship: Scenario * --- * ScenarioSkill * --- * Skill I have mapped out the lists as bags, as I believe this is the best option from what I have read. The mappings are as follows. Within the Scenario class: <bag name="Skills" table="ScenarioSkills"> <key column="ScenarioID" foreign-key="FK_ScenarioSkill_ScenarioID"/> <many-to-many class="Domain.Skill, Domain" column="SkillID" /> </bag> And within the Skill class: <bag name="Scenarios" table="ScenarioSkills" inverse="true" access="noop" cascade="all"> <key column="SkillID" foreign-key="FK_ScenarioSkill_SkillID" /> <many-to-many class="Domain.Scenario, Domain" column="ScenarioID" /> </bag> Everything works fine, except that when I try to delete a skill, it cannot be deleted because there is a reference constraint on the SkillID column of the ScenarioSkill table. Can anyone help me? I am using NHibernate 2 on a C# ASP.NET 3.5 web application solution.


  • Can't store UTF-8 in RDS despite setting up new Parameter Group using Rails on Heroku

    - by Lail
    I'm setting up a new instance of a Rails(2.3.5) app on Heroku using Amazon RDS as the database. I'd like to use UTF-8 for everything. Since RDS isn't UTF-8 by default, I set up a new Parameter Group and switched the database to use that one, basically per this. Seems to have worked: SHOW VARIABLES LIKE '%character%'; character_set_client utf8 character_set_connection utf8 character_set_database utf8 character_set_filesystem binary character_set_results utf8 character_set_server utf8 character_set_system utf8 character_sets_dir /rdsdbbin/mysql-5.1.50.R3/share/mysql/charsets/ Furthermore, I've successfully setup Heroku to use the RDS database. After rake db:migrate, everything looks good: CREATE TABLE `comments` ( `id` int(11) NOT NULL AUTO_INCREMENT, `commentable_id` int(11) DEFAULT NULL, `parent_id` int(11) DEFAULT NULL, `content` text COLLATE utf8_unicode_ci, `child_count` int(11) DEFAULT '0', `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`), KEY `commentable_id` (`commentable_id`), KEY `index_comments_on_community_id` (`community_id`), KEY `parent_id` (`parent_id`) ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; In the markup, I've included: <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> Also, I've set: production: encoding: utf8 collation: utf8_general_ci ...in the database.yml, though I'm not very confident that anything is being done to honor any of those settings in this case, as Heroku seems to be doing its own config when connecting to RDS. Now, I enter a comment through the form in the app: "Úbe® ƒåiL", but in the database I've got "Úbe® Æ’Ã¥iL" It looks fine when Rails loads it back out of the database and it is rendered to the page, so whatever it is doing one way, it's undoing the other way. If I look at the RDS database in Sequel Pro, it looks fine if I set the encoding to "UTF-8 Unicode via Latin 1". So it seems Latin-1 is sneaking in there somewhere. Somebody must have done this before, right? What am I missing?


  • Refining Search Results [PHP/MySQL]

    - by Dae
    I'm creating a set of search panes that allow users to tweak their results set after submitting a query. We pull commonly occurring values in certain fields from the results and display them in order of their popularity - you've all seen this sort of thing on eBay. So, if a lot of rows in our results were created in 2009, we'll be able to click "2009" and see only rows created in that year. What in your opinion is the most efficient way of applying these filters? My working solution was to discard entries from the results that didn't match the extra arguments, like: while($row = mysql_fetch_assoc($query)) { foreach($_GET as $key => $val) { if($val !== $row[$key]) { continue 2; } } // Output... } This method should hopefully only query the database once in effect, as adding filters doesn't change the query - MySQL can cache and reuse one data set. On the downside it makes pagination a bit of a headache. The obvious alternative would be to build any additional criteria into the initial query, something like: $sql = "SELECT * FROM tbl MATCH (title, description) AGAINST ('$search_term')"; foreach($_GET as $key => $var) { $sql .= " AND ".$key." = ".$var; } Are there good reasons to do this instead? Or are there better options altogether? Maybe a temporary table? Any thoughts much appreciated!
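
    If the filters go into the query instead of being applied in PHP, pagination stays a plain LIMIT and MySQL does the narrowing; a rough sketch with a whitelist of filter columns (the column names and the $search_term, $offset and $per_page variables are placeholders, not part of the original code) might look like:

      // Only columns named here can be used as refinements, and values are escaped,
      // so arbitrary $_GET keys and values can't be injected into the SQL.
      $allowed = array('year', 'category', 'author');

      $sql = "SELECT * FROM tbl WHERE MATCH (title, description) AGAINST ('"
           . mysql_real_escape_string($search_term) . "')";

      foreach ($allowed as $col) {
          if (isset($_GET[$col]) && $_GET[$col] !== '') {
              $sql .= " AND `" . $col . "` = '" . mysql_real_escape_string($_GET[$col]) . "'";
          }
      }

      $sql .= " LIMIT " . (int) $offset . ", " . (int) $per_page;
      $query = mysql_query($sql);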


  • Codeigniter/mySQL question. Checking if several inserts are possible before actual insertion.

    - by Ethan
    Hey SO, So I'm trying to perform a couple of inserts at once that are kind of co-dependent on each other. Let's say I'm doing a dog rating website. Anyone can add a dog to my database, but in doing so they also need to add a preliminary rating of the dog. Other people can then rate the dog afterwards. Ratings to dogs is a many-to-one relationship: a dog has many ratings. This means for my preliminary add, since I have the person both rate and add the dog, I need to take the rating and set a foreign key to the dog's primary key. As far as I know, this means I have to actually add the dog, then check what that new addition's primary key is, and then put that in my rating before insertion. Now suppose something were to go wrong with the rating's insert, be it a string that's too long, or something that I've overlooked somehow. If the rating insert failed, the dog has already been inserted, but the rating has not. In this case, I'd like the dog to not have been added in the first place. Does this mean I have to write code that says "if the rating fails, do a remove for the dog," or is there a way to predict what the key will be for the dog should everything go as planned? Is there a way to say "hold that spot," and then if everything works, add it? Any help would be greatly appreciated. Thanks!!
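
    A sketch of doing the two inserts atomically with CodeIgniter's transaction and Active Record helpers (this assumes the tables use a transactional engine such as InnoDB; the 'dogs'/'ratings' table and column names and the $dog_name/$score variables are examples, not an existing schema):

      $this->db->trans_start();

      $this->db->insert('dogs', array('name' => $dog_name));
      $dog_id = $this->db->insert_id();   // primary key of the dog just inserted

      $this->db->insert('ratings', array('dog_id' => $dog_id, 'score' => $score));

      $this->db->trans_complete();        // rolls both inserts back if either query failed

      if ($this->db->trans_status() === FALSE) {
          // neither the dog nor the rating was saved; report the error to the user
      }

    With MyISAM tables there is no rollback, so in that case the fallback is still deleting the dog row when the rating insert fails.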


  • centos 6 nfs: logs not showing anywhere

    - by ancillary
    Can someone please tell me where NFS logs in CentOS 6? Or perhaps where I can tell NFS to send its logs? At the present time, there appears to be no such setting. Trying to get the thing to work without logs is quite frustrating. [root@houston netshare]# locate nfs| grep log [root@houston netshare]# [root@houston netshare]# grep -Rni "nfs" /var/log /var/log/anaconda.storage.log:23:20:41:33,962 DEBUG : registered device format class NFS as nfs /var/log/anaconda.storage.log:24:20:41:33,962 DEBUG : registered device format class NFSv4 as nfs4 This is a day-old CentOS 6 install from the LiveCD, and yum update has been run.


  • application specific seed data population

    - by user339108
    Env: JBoss (H2, MySQL, Postgres), JPA, Hibernate 3.3.x @Id @GeneratedValue(strategy = IDENTITY) private Integer key; Currently our primary keys are created using the above annotation. We expect to support a large number of users (~a million), so what key type should be used? Should it be Integer or Long, or should I use the unsigned versions of the above declarations? We have a J2EE application which needs to be populated with some seed data on installation. On purchase, the customer creates his own data on top of the application. We just want to make sure that there is enough room to ship, modify or add data for future releases. What would be the best mechanism to support this? We had looked at starting all table identifiers from a certain id (say 1000), but this mandates modifying primary key generation to use table- or sequence-based generators, and we have around ~100 tables. We are not sure if this is the right strategy. If we use a signed integer approach for the key, would it make sense to have the seed data be everything starting from 0 and below (i.e. negative numbers), so that all customer-specific data will live at 0 and above (i.e. positive numbers)?

