Search Results

Search found 1840 results on 74 pages for 'kevin jim'.

Page 12/74 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • How do I set Ctrl+Alt+something for Audacious global hotkeys?

    - by Jim
    I want to set up global hotkeys in Audacious as follows:
      Ctrl+Alt+Insert for play
      Ctrl+Alt+Home for pause
      Ctrl+Alt+End for stop
      Ctrl+Alt+Page Up for previous track
      Ctrl+Alt+Page Down for next track
      Ctrl+Alt+Up Arrow for volume up
      Ctrl+Alt+Down for volume down
      Ctrl+Alt+Left for skip back
      Ctrl+Alt+Right for skip forward
    When I try to set these up, Audacious doesn't allow two mask (modifier) keys.
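
    If Audacious itself won't accept two modifier keys, one hedged workaround is to bind the combinations in a separate hotkey daemon and drive the player with audtool. A sketch of an ~/.xbindkeysrc for xbindkeys follows; the exact audtool command names vary between Audacious versions, so treat them as assumptions to check against audtool's own help output:

      # ~/.xbindkeysrc -- requires the xbindkeys package and audtool (ships with Audacious)
      "audtool playback-play"
        control+alt + Insert
      "audtool playback-pause"
        control+alt + Home
      "audtool playback-stop"
        control+alt + End
      "audtool playlist-reverse"
        control+alt + Prior
      "audtool playlist-advance"
        control+alt + Next
      "audtool set-volume +5"
        control+alt + Up
      "audtool set-volume -5"
        control+alt + Down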

    Read the article

  • Ubuntu One Preferences does not show usage, name, e-mail, current plan

    - by Jim
    Ubuntu One is correctly synchronizing selected files between two computers running Ubuntu 10.10. When I open Ubuntu One Preferences on one computer, the Account tab does not display the usage, name, e-mail or current plan. On the other computer all information is shown correctly. On the Devices tab the two computers are not shown; they do show correctly on the other computer. Any ideas on how to fix this problem? I have reinstalled Ubuntu One per this link: https://wiki.ubuntu.com/UbuntuOne/FAQ/HowDoICompletelyRemoveAndReinstallUbuntuOne

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and I think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm?

    The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:

      User table:
        UserID (primary key, indexed, no dupes)
        FirstName
        LastName
        Department

      Request table:
        RequestID (primary key, indexed, no dupes)
        <...> various data fields containing request details
        UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

      UsersTable
        UserID  FirstName  LastName
        234     John       Doe
        516     Jane       Doe
        123     Foo        Bar

      DepartmentsTable
        DepartmentID  Name
        1             Sales
        2             HR
        3             IT

      UserDepartmentTable
        UserDepartmentID  UserID  Department
        1                 234     2
        2                 516     2
        3                 123     1

      RequestTable
        RequestID  UserID  <...>
        1          516     blah
        2          516     blah
        3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table and numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers...
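
    For reference, a minimal sketch of the simpler two-table design described above, in generic SQL; the names follow the post, but the column types are assumptions:

      -- Simpler design from the post: one row per user, one per request
      CREATE TABLE Users (
          UserID     INTEGER PRIMARY KEY,   -- indexed, no dupes
          FirstName  VARCHAR(50),
          LastName   VARCHAR(50),
          Department VARCHAR(50)
      );

      CREATE TABLE Requests (
          RequestID INTEGER PRIMARY KEY,
          -- <...> various data fields containing request details
          UserID    INTEGER NOT NULL REFERENCES Users(UserID)  -- foreign key to Users
      );

      -- Typical lookup: every request with the requesting user's details
      SELECT r.RequestID, u.FirstName, u.LastName, u.Department
      FROM Requests r
      JOIN Users u ON u.UserID = r.UserID;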

    Read the article

  • Is it a good programming practice to have a class with several .h files?

    - by Jim Thio
    I suppose the class has several different interfaces: some it shows to some classes, some it shows to other classes. Is there any good reason for that? One thing I can think of is that with one .h per class, an interface would either be public or private. What about if I want some interface to be available to some friend classes and some interface to be truly public? Sample:

      @interface listNewController : BadgerStandardViewViewController <UITableViewDelegate, UITableViewDataSource, UITextFieldDelegate, NSFetchedResultsControllerDelegate, UIScrollViewDelegate, UIGestureRecognizerDelegate>
      {
      }
      @property (nonatomic) IBOutlet NSFetchedResultsController *FetchController;
      @property (nonatomic) IBOutlet UITextField *searchBar1;
      @property (nonatomic) IBOutlet UITableView *tableViewA;

      + (listNewController *) singleton; //For Easier Access
      - (void)collapseAll;
      - (void)TitleViewClicked:(TitleView *) theTitleView;
      - (NSUInteger) countOfEachSection:(NSInteger)section;
      @end

    Many of those public properties and functions are only ever called by one other class. I wonder why I need to make them available to many classes. It's in Objective-C, by the way.
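
    One common way to give a class more than one header in Objective-C is a named category (or class extension) declared in a separate file: the public header stays minimal, a second header is imported only by the few "friend" classes that need it, and truly private members live in a class extension inside the .m file. A minimal sketch using the names from the post; the "+Internal" file name is just a convention, not something from the question:

      // listNewController.h -- the truly public interface
      @interface listNewController : BadgerStandardViewViewController
      + (listNewController *)singleton;
      @end

      // listNewController+Internal.h -- imported only by the classes that need these
      #import "listNewController.h"
      @interface listNewController (Internal)
      - (void)collapseAll;
      - (NSUInteger)countOfEachSection:(NSInteger)section;
      @end

      // listNewController.m -- class extension for genuinely private state
      #import "listNewController+Internal.h"
      @interface listNewController ()
      @property (nonatomic) IBOutlet UITableView *tableViewA;
      @end

    Nothing stops another class from importing the +Internal header, but keeping it out of any shared prefix header makes the intended visibility obvious.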

    Read the article

  • Nvidia driver installation results in resolution change to 800x600 with Xubuntu 10.04 (can't change)

    - by Jim Michael
    I have a 2.8 GHz P4 and an Nvidia FX 5500 AGP graphics card. I've installed Xubuntu 10.04. It is WAY too laggy with the default o/s driver. Installing the Nvidia 173 driver, modaliases and nvidia-settings packages via the Synaptic package manager results in the following error message: An error occurred, please run Package Manager from the right click menu or apt-get in a terminal to see what is wrong. Error: Opening the cache (E::read, still have 11898251 to read but none left, E: The package lists or status file could not be parsed or opened.) This usually means that your installed packages have unmet dependencies. When restarting the PC the resolution drops to 800x600 (from the monitor's native 1440x900). Nvidia settings cannot be changed either from the Xfce menu or Nvidia X Server. Nvidia X Server gives the following error message: You do not appear to be using the Nvidia X driver. Please edit your X configuration file (just run 'nvidiaxconfig' as root) and restart the X server. Also, I can't find anything in any directory called xorg.
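
    A hedged sketch of the usual recovery steps for a corrupted apt cache and a missing xorg.conf, assuming the nvidia-xconfig tool is installed alongside the 173 driver on Xubuntu 10.04; paths and package names may differ:

      # Rebuild the package lists that apt says it cannot parse
      sudo rm -rf /var/lib/apt/lists/*
      sudo apt-get update
      sudo apt-get -f install

      # Write an /etc/X11/xorg.conf that loads the proprietary driver, then restart X
      sudo nvidia-xconfig
      sudo reboot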

    Read the article

  • Can anyone point me to some open source DirectX rendering engines or frameworks? [on hold]

    - by Jim
    I'm completely new to graphics API programming, but not at all new to the theory and principles of operation of game engines and rendering engines. That being said, I want to do some experiments rendering very dense geometry scenes in a basic rendering engine or game engine. I don't need a lot of bells and whistles. What I need is enough control that I can implement my own scene graph algorithms and control the rendering pipeline very specifically. My ideal candidate engine would be either a rendering engine or game engine with a modular design that might be ready to go out of the box but would be simple enough in case I need to rip out some of the guts in the rendering management and implement my own. It's a tough call because I'm right at the level where it's almost better to go from scratch, but there's no sense in having to build every single basic thing such as hierarchical transforms, etc. I just want to work with rendering optimization to push dense geometry for maximum FPS. Does anyone have a suggestion for an engine or basic framework to use? I requested DirectX in my title because I figured it would likely be better supported and less likely for me to run into some obscure, less-documented problem. But OpenGL might be acceptable if the recommended framework was definitely better than my other options. EDIT: I should add that I really want GPU tessellation support (as part of adding to the density of geometry detail).

    Read the article

  • Using Drupal to build a directory listing? [on hold]

    - by Jim
    I am trying to create a form inside Drupal that will allow me to create a directory similar to this diagram: http://i.imgur.com/EtChBbG.jpg I tried looking into HTML tables, but they are too basic for what I'm trying to do. How do I create a directory that can archive data in alphabetical order? It also has to be able to sort by letters and other categories. Does anyone have an idea of how I should go about doing this? Thanks!

    Read the article

  • usb keyboard stopped working in terminal window after update from 13.04 to 13.10

    - by Jim
    When I start the computer, I am logged in automatically. Using the keyboard, I type Ctrl+Alt+t, which brings up a terminal window, just like it should; however, nothing happens when I attempt to type into the terminal. If I change over to the guest account, I can type into the terminal, but (different problem here, I'll ask it separately unless someone is kind enough to answer it here) my password doesn't work for sudo or anything else. Elsewhere I read that Language support could fix it, but that won't accept my password either - and no, I haven't changed it lately. Also, my Plex Media Server seems to have disappeared. I'll re-install everything if necessary, but I sure would rather avoid that if possible

    Read the article

  • Data recovery on a Samsung SV1203N HDD

    - by Jim Conace
    Let me start by saying that I am a beginner in Ubuntu, but pretty knowledgeable with Windows. I am having a strange problem trying to recover data from a Samsung SV1203N HDD (120 GiB). I have tried numerous things in both Windows 7 and Ubuntu 12.04 LTS (on a flash drive, an external HDD and a Live CD). This drive has some very important data on it and I am praying some of you Ubuntu geeks can help me get it back. Here's my problem. The HDD is clicking while I am booting, but it stops when I get into Ubuntu or Windows. It refuses to be detected in the BIOS, so I can't perform any tests on it. I have tried numerous things in both Windows (repair CD, jumpers, etc.) and Ubuntu (Boot-Fix, GParted, TestDisk, PhotoRec, forcing a mount, etc.), but it all seems to lead me back to the fact that the drive is not being recognized in the BIOS. I've even tried chilling the drive in the fridge, which worked well for another drive I worked on, and I recovered all of the data in Ubuntu flawlessly. I am assuming that since the drive stops clicking when I get into the OSes, there is hope for recovery. I am going to try an IDE/SATA to USB cable, and replacing the logic board, but I want to exhaust all other possibilities before I do that. Any help would be greatly appreciated. Bye

    Read the article

  • In C++: Good reasons for NOT using symmetrical memory management (i.e. new and delete)

    - by Jim G
    I am trying to learn C++ and programming in general. Currently I am studying open source with the help of UML. Learning is my hobby, and a great one too. My understanding of memory allocation in C++ is that it should be symmetrical: a class is responsible for its resources, and if memory is allocated using new, it should be returned using delete in the same class. It is like a library: you, the class, are responsible for the books you have borrowed, and you return them when you are done. This, in my mind, makes sense. It makes memory management more manageable, so to speak. So far so good. The problem is that this is not how it works in the real world. In Qt, for instance, you create QObjects with new and then hand over ownership of the object to Qt. In other words, you create QObjects and Qt destroys them for you. Thus, unsymmetrical memory management. Obviously the people behind Qt must have a good reason for doing this. It must be beneficial in some way. My questions are:
      What is the problem with Bjarne Stroustrup's idea of symmetrical memory management contained within a class?
      What do you gain by splitting new and delete so that you create an object and destroy it in different classes, as you do in Qt?
      Is it common to split new and delete in other projects not involving Qt, and if so, why?
    Thanks for any help shedding light on this mystery!
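
    A minimal C++ sketch of the Qt convention described above, assuming Qt's QObject parent-child ownership: the parent deletes its children, so the delete deliberately does not live in the code that called new:

      #include <QApplication>
      #include <QPushButton>
      #include <QWidget>

      int main(int argc, char *argv[])
      {
          QApplication app(argc, argv);

          QWidget window;                                          // stack-allocated; owns the children created below
          QPushButton *button = new QPushButton("Quit", &window);  // parent takes ownership

          QObject::connect(button, SIGNAL(clicked()), &app, SLOT(quit()));

          window.show();
          return app.exec();
          // no 'delete button' here: ~QWidget() walks the object tree and deletes it
      }

    The benefit is that ownership follows the object tree rather than the allocating class: a dialog and every widget inside it are destroyed together, which is awkward to express when each class must delete exactly what it new-ed.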

    Read the article

  • Grow Your Business with Security

    - by Darin Pendergraft
    Author: Kevin Moulton

    Kevin Moulton has been in the security space for more than 25 years, and with Oracle for 7 years. He manages the East Enterprise Security Sales Consulting Team. He is also a Distinguished Toastmaster. Follow Kevin on Twitter at twitter.com/kevin_moulton, where he sometimes tweets about security, but might also tweet about running, beer, food, baseball, football, good books, or whatever else grabs his attention. Kevin will be a regular contributor to this blog, so stay tuned for more posts from him.

    It happened again! There I was, reading something interesting online, and realizing that a friend might find it interesting too. I clicked on the little email link, thinking that I could easily forward this to my friend, but no! Instead, a new screen popped up where I was asked to create an account. I was expected to create a User ID and password, not to mention providing some personally identifiable information, just for the privilege of helping that website spread their word. Of course, I didn't want to have to remember a new account and password, I didn't want to provide the requisite information, and I didn't want to waste my time. I gave up, closed the web page, and moved on to something else. I was left with a bad taste in my mouth, and my friend might never find her way to this interesting website. If you were this content provider, would this be the outcome you were looking for?

    A few days later, I had a similar experience, but this one went a little differently. I was surfing the web, when I happened upon some little tchotchke that I just had to have. I added it to my cart. When I went to buy the item, I was again brought to a page to create an account. Groan! But wait! On this page, I also had the option to sign in with my OpenID account, my Facebook account, my Yahoo account, or my Google Account. I have all of those! No new account to create, no new password to remember, and no personally identifiable information to be given to someone else (I've already given it all to those other guys, after all). In this case, the vendor was easy to deal with, and I happily completed the transaction. That pleasant experience will bring me back again.

    This is where security can grow your business. It's a differentiator. You've got to have a presence on the web, and that presence has to take into account all the smart phones everyone's carrying, and the tablets that took over Cyber Monday this year. If you are a company that a customer can deal with securely, and do so easily, then you are a company customers will come back to again and again.

    I recently had a need to open a new bank account. Every bank has a web presence now, but they are certainly not all the same. I wanted one that I could deal with easily using my laptop, but I also wanted 2-factor authentication in case I had to log in from a shared machine, and I wanted an app for my iPad. I found a bank with all three, and that's who I am doing business with.

    Let's say, for example, that I'm in a regular Texas Hold-em game on Friday nights, so I move a couple of hundred bucks from checking to savings on Friday afternoons. I move a similar amount each week and I do it from the same machine. The bank trusts me, and they trust my machine. Most importantly, they trust my behavior. This is adaptive authentication. There should be no reason for my bank to make this transaction difficult for me.

    Now let's say that I log in from a Starbucks in Uzbekistan, and I transfer $2,500. What should my bank do now? Should they stop the transaction? Should they call my home number? (My former bank did exactly this once when I was taking money out of an ATM on a business trip, when I had provided my cell phone number as my primary contact. When I asked them why they called my home number rather than my cell, they told me that their "policy" is to call the home number. If I'm on the road, what exactly is the use of trying to reach me at home to verify my transaction?) But, back to Uzbekistan… Should my bank assume that I am happily at home in New Jersey, and someone is trying to hack into my account? Perhaps they think they are protecting me, but I wouldn't be very happy if I happened to be traveling on business in Central Asia. What if my bank were to automatically analyze my behavior and calculate a risk score? Clearly, this scenario would be outside of my typical behavior, so my risk score would necessitate something more than a simple login and password. Perhaps, in this case, a one-time password to my cell phone would prove that this is not just some hacker halfway around the world.

    But, what if you're not a bank? Do you need this level of security? If you want to be a business that is easy to deal with while also protecting your customers, then of course you do. You want your customers to trust you, but you also want them to enjoy doing business with you. Make it easy for them to do business with you, and they'll come back, and perhaps even Tweet about it, or Like you, and then their friends will follow.

    How can Oracle help? Oracle has the technology and expertise to help you grow your business with security. Oracle Adaptive Access Manager will help you to prevent fraud while making it easier for your customers to do business with you by providing the risk analysis I discussed above, step-up authentication, and much more. Oracle Mobile and Social Access Service will help you to secure mobile access to applications by expanding on your existing back-end identity management infrastructure, and allowing your customers to transact business with you using the social media accounts they already know. You also have device fingerprinting and metrics to help you to grow your business securely. Security is not just a cost anymore. It's a way to set your business apart. With Oracle's help, you can be the business that everyone's tweeting about.

    Image courtesy of Flickr user shareski

    Read the article

  • Can't start HTTPD

    - by Kevin
    I'm trying to start httpd, but it only gives me '[FAILED]'. Now, I installed exim, but that didn't worked, so I uninstalled it. But then the web server didn't worked any more! I tried to /etc/init.d/httpd start, but that gives '[FAILED]' without a reason. What can I do to get it started? httpd -e debug gives: [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module auth_basic_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module auth_digest_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module authn_file_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module authn_alias_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module authn_anon_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module authn_dbm_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module authn_default_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module authz_host_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module authz_user_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module authz_owner_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module authz_groupfile_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module authz_dbm_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module authz_default_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module ldap_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module authnz_ldap_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module include_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module log_config_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module logio_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module env_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module ext_filter_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module mime_magic_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module expires_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module deflate_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module headers_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module usertrack_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module setenvif_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module mime_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module dav_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module status_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module autoindex_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module info_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module dav_fs_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module vhost_alias_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module negotiation_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module dir_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module actions_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module speling_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module userdir_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module alias_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module rewrite_module [Wed Nov 23 11:47:25 2011] 
[debug] mod_so.c(246): loaded module proxy_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module proxy_balancer_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module proxy_ftp_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module proxy_http_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module proxy_connect_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module cache_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module suexec_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module disk_cache_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module file_cache_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module mem_cache_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module cgi_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module version_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module ssl_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module fcgid_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module perl_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module php5_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module proxy_ajp_module [Wed Nov 23 11:47:25 2011] [debug] mod_so.c(246): loaded module python_module Regards, Kevin
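
    Since the init script only reports [FAILED], a few hedged diagnostic steps using standard Apache commands; the error log path below is the usual CentOS default and may differ on this machine:

      # Check the configuration files for syntax errors
      httpd -t

      # Run a single worker in the foreground so startup errors print to the terminal
      httpd -X -e debug

      # Read the real failure reason from the error log
      tail -n 50 /var/log/httpd/error_log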

    Read the article

  • PHP install sqlite3 extension

    - by Kevin
    We are using PHP 5.3.6 here, but we used the --without-sqlite3 command when compiling PHP. (It stands in the 'Configure Command' column). But, it is very risky to recompile PHP on that server; there are many visitors. How can we install/use sqlite3? Regards, Kevin [EDIT] yum repolist gives: Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.nl.leaseweb.net * extras: mirror.nl.leaseweb.net * updates: mirror.nl.leaseweb.net repo id repo name status base CentOS-5 - Base 3,566 extras CentOS-5 - Extras 237 updates CentOS-5 - Updates 376 repolist: 4,179 rpm -qa | grep php gives: php-pdo-5.3.6-1.w5 php-mysql-5.3.6-1.w5 psa-php5-configurator-1.5.3-cos5.build95101022.10 php-mbstring-5.3.6-1.w5 php-imap-5.3.6-1.w5 php-cli-5.3.6-1.w5 php-gd-5.3.6-1.w5 php-5.3.6-1.w5 php-common-5.3.6-1.w5 php-xml-5.3.6-1.w5 php -i | grep sqlite gives: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/sqlite3.so' - /usr/lib64/php/modules/sqlite3.so: cannot open shared object file: No such file or directory in Unknown on line 0 Configure Command => './configure' '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/usr/com' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--cache-file=../config.cache' '--with-libdir=lib64' '--with-config-file-path=/etc' '--with-config-file-scan-dir=/etc/php.d' '--disable-debug' '--with-pic' '--disable-rpath' '--without-pear' '--with-bz2' '--with-exec-dir=/usr/bin' '--with-freetype-dir=/usr' '--with-png-dir=/usr' '--with-xpm-dir=/usr' '--enable-gd-native-ttf' '--without-gdbm' '--with-gettext' '--with-gmp' '--with-iconv' '--with-jpeg-dir=/usr' '--with-openssl' '--with-pcre-regex=/usr' '--with-zlib' '--with-layout=GNU' '--enable-exif' '--enable-ftp' '--enable-magic-quotes' '--enable-sockets' '--enable-sysvsem' '--enable-sysvshm' '--enable-sysvmsg' '--with-kerberos' '--enable-ucd-snmp-hack' '--enable-shmop' '--enable-calendar' '--without-mime-magic' '--without-sqlite' '--without-sqlite3' '--with-libxml-dir=/usr' '--enable-xml' '--with-system-tzdata' '--enable-force-cgi-redirect' '--enable-pcntl' '--with-imap=shared' '--with-imap-ssl' '--enable-mbstring=shared' '--enable-mbregex' '--with-gd=shared' '--enable-bcmath=shared' '--enable-dba=shared' '--with-db4=/usr' '--with-xmlrpc=shared' '--with-ldap=shared' '--with-ldap-sasl' '--with-mysql=shared,/usr' '--with-mysqli=shared,/usr/bin/mysql_config' '--enable-dom=shared' '--with-pgsql=shared' '--enable-wddx=shared' '--with-snmp=shared,/usr' '--enable-soap=shared' '--with-xsl=shared,/usr' '--enable-xmlreader=shared' '--enable-xmlwriter=shared' '--with-curl=shared,/usr' '--enable-fastcgi' '--enable-pdo=shared' '--with-pdo-odbc=shared,unixODBC,/usr' '--with-pdo-mysql=shared,/usr' '--with-pdo-pgsql=shared,/usr' '--with-pdo-sqlite=shared,/usr' '--with-pdo-dblib=shared,/usr' '--enable-json=shared' '--enable-zip=shared' '--with-readline' '--with-pspell=shared' '--enable-phar=shared' '--with-mcrypt=shared,/usr' '--with-tidy=shared,/usr' '--with-mssql=shared,/usr' '--enable-sysvmsg=shared' '--enable-sysvshm=shared' '--enable-sysvsem=shared' '--enable-posix=shared' '--with-unixODBC=shared,/usr' '--enable-fileinfo=shared' '--enable-intl=shared' '--with-icu-dir=/usr' 
'--with-recode=shared,/usr' /etc/php.d/pdo_sqlite.ini, /etc/php.d/sqlite3.ini, PHP Warning: Unknown: It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'Europe/Berlin' for 'CET/1.0/no DST' instead in Unknown on line 0 PDO drivers => mysql, sqlite pdo_sqlite PWD => /root/sqlite _SERVER["PWD"] => /root/sqlite _ENV["PWD"] => /root/sqlite
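
    One way to avoid recompiling: the configure line above disabled the SQLite3 class with --without-sqlite3, but it also built --with-pdo-sqlite=shared, and the php -i output lists sqlite among the PDO drivers. So the PDO interface to SQLite should already work without touching the build; a minimal sketch (the database path is just an example):

      <?php
      // Uses the already-compiled pdo_sqlite driver instead of the SQLite3 class
      $db = new PDO('sqlite:/tmp/example.db');
      $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

      $db->exec('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');
      $stmt = $db->prepare('INSERT INTO notes (body) VALUES (?)');
      $stmt->execute(array('hello from pdo_sqlite'));

      foreach ($db->query('SELECT id, body FROM notes') as $row) {
          echo $row['id'] . ': ' . $row['body'] . "\n";
      }
      ?>

    This only covers the PDO API; whether a separate SQLite3-class extension package exists for these particular PHP builds is something to verify against the package repository.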

    Read the article

  • How clean is deleting a computer object?

    - by Kevin
    Though quite skilled at software development, I'm a novice when it comes to Active Directory. I've noticed that AD seems to have a lot of stuff buried in the directory and schema which does not appear superficially when using simplified tools such as Active Directory Users and Computers. It kind of feels like the Windows registry, where COM classes have all kinds of intertwined references, many of which are purely by GUID, such that it's not enough to just search for anything referencing "GadgetXyz" by name in order to cleanly remove GadgetXyz. This occasionally leads to the uneasy feeling that I may have useless garbage building up in there which I have no idea how to weed out. For instance, I made the mistake a while back of trying to rename a DC, figuring I could just do it in the usual manner from Control Panel. I found references to the old name buried all over the place which made it impossible to reuse that name without considerable manual cleanup. Even long after I got it all working, I've stumbled upon the old name hidden away in LDAP. (There were no other DCs left in the picture at that time so I don't think it was a tombstone issue.) More specifically, I'm worried about the case of just outright deleting a computer from AD. I understand the cleanest way to do it is to log into the computer itself and tell it to leave the domain. (As an aside, doing this in Windows 8 seems to only disable the computer object and not delete it outright!) My concern is cases where this is not possible, for instance because it was on an already-deleted VM image. I can simply go into Active Directory Users and Computers, find the computer object, click it, and press Delete, and it seems to go away. My question is, is it totally, totally gone, or could this leave hanging references in any Active Directory nook or cranny I won't know to look in? (Excluding of course the expected tombstone records which expire after a set time.) If so, is there any good way to clean up the mess? Thank you for any insight! Kevin ps., It was over a year ago so I don't remember the exact details, but here's the gist of the DC renaming issue. I started with a single 2008 DC named ABC in a physical machine and wanted to end up instead with a DC of the same name running in a vSphere VM. Not wanting to mess with imaging the physical machine, my plan instead was: Rename ABC to XYZ. Fresh install 2008 on a VM, name it ABC, and join it to the domain. (I may have done the latter in the same step as promoting to DC; I don't recall.) dcpromo the new ABC as a 2nd DC, including GC. Make sure the new ABC replicated correctly from XYZ and then transfer the FSMO roles from XYZ to it. Once everything was confirmed to work with the new ABC alone, demote XYZ, remove the AD role, and remove it from the domain. Eventually I managed to do this but it was a much bumpier ride than expected. In particular, I got errors trying to join the new ABC to the domain. These included "The pre-windows 2000 name is already in use" and "No mapping between account names and security IDs was done." I eventually found that the computer object for XYZ had attributes that still referred to it as ABC. Among these were servicePrincipalName, msDS-AdditionalDnsHostName, and msDS-AdditionalSamAccountName. The latter I could not edit via Attribute Editor and instead had to run this against XYZ: NETDOM computername <simple-name> /add:<FQDN> There were some other hitches I don't remember exactly.
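
    A hedged PowerShell sketch (it needs the ActiveDirectory module from Server 2008 R2 / RSAT) for checking whether anything still references a removed or renamed machine, using the same attributes that caused trouble in the rename story above; OLDNAME is a placeholder:

      # Load the AD cmdlets
      Import-Module ActiveDirectory

      # Find objects that still mention the old machine name in the attributes
      # that lingered after the DC rename (servicePrincipalName and friends)
      Get-ADObject -LDAPFilter "(|(servicePrincipalName=*OLDNAME*)(msDS-AdditionalDnsHostName=*OLDNAME*)(msDS-AdditionalSamAccountName=*OLDNAME*))" `
          -Properties servicePrincipalName, "msDS-AdditionalDnsHostName", "msDS-AdditionalSamAccountName" |
          Format-List Name, DistinguishedName, servicePrincipalName

      # The delete itself, equivalent to pressing Delete in AD Users and Computers
      Remove-ADComputer -Identity "OLDNAME"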

    Read the article

  • log4net not logging with a mixture of .net 1.1 and .net 3.5

    - by Jim P
    Hi All, I have an iis server on a windows 2003 production machine that will not log using log4net in the .net3.5 web application. Log4net works fine in the 1.1 apps using log4net version 1.2.9.0 and but not the 3.5 web app. The logging works fine in a development and staging environment but not in production. It does not error and I receive no events logged in the event viewer and don't know where to look next. I have tried both versions of log4net (1.2.9.0 and 1.2.10.0) and both work in development and staging but not in production. For testing purposes I have created just a single page application that just echos back the time when the page is hit and also is supposed to log to my logfile using log4net. Here is my web.config file: <configSections> <!-- LOG4NET Configuration --> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,log4net" requirePermission="false" /> </configSections> <log4net debug="true"> <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender"> <param name="File" value="D:\DIF\Logs\TestApp\TestApp_"/> <param name="AppendToFile" value="true"/> <param name="RollingStyle" value="Date"/> <param name="DatePattern" value="yyyyMMdd\.\l\o\g"/> <param name="StaticLogFileName" value="false"/> <layout type="log4net.Layout.PatternLayout"> <param name="ConversionPattern" value="%date{HH:mm:ss} %C::%M [%-5level] - %message%newline"/> </layout> </appender> <root> <level value="ALL"/> <appender-ref ref="RollingFileAppender"/> </root> </log4net> Here is my log4net initialization: // Logging for the application private static ILog mlog = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType); protected void Application_Start(object sender, EventArgs e) { try { // Start the configuration of the Logging XmlConfigurator.Configure(); mlog.Info("Started logging for the TestApp Application."); } catch (Exception ex) { throw; } } Any help would be greatly appreciated. Thanks, Jim
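
    One standard way to find out why log4net goes quiet only in production is to turn on its internal debugging; a sketch of the web.config additions (the trace file path reuses the post's log directory and must be writable by the worker process account):

      <appSettings>
        <!-- log4net writes its own diagnostics to the trace listeners below -->
        <add key="log4net.Internal.Debug" value="true" />
      </appSettings>

      <system.diagnostics>
        <trace autoflush="true">
          <listeners>
            <add name="log4netInternal"
                 type="System.Diagnostics.TextWriterTraceListener"
                 initializeData="D:\DIF\Logs\TestApp\log4net-internal.log" />
          </listeners>
        </trace>
      </system.diagnostics>

    The resulting trace usually states outright whether the problem is a permissions error on the log directory or a configuration section that never got read.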

    Read the article

  • Problem getting correct parameters for C# P/Invoke call to C++ dll

    - by Jim Jones
    Trying to Interop a functionality from the Outside In API from Oracle. Have the following function: SCCERR EXOpenExport {VTHDOC hDoc, VTDWORD dwOutputId, VTDWORD dwSpecType, VTLPVOID pSpec, VTDWORD dwFlags, VTSYSPARAM dwReserved, VTLPVOID pCallbackFunc, VTSYSPARAM dwCallbackData, VTLPHEXPORT phExport); From the header files I reduced the parameters to: typedef VTSYSPARAM VTHDOC, VTLPHDOC * typedef DWORD_PTR VTSYSPARAM typedef unsigned long DWORD_PTR typedef unsigned long VTDWORD typedef VTVOID* VTLPVOID #define VTVOID void typedef VTHDOC VTHEXPORT, *VTLPEXPORT These are for 32 bit windows Going through the header files, the example programs, and the documentation I found: 1. That pSpec could be a pointer to a buffer or NULL, so I set it to a IntPtr.Zero (documentation). 2. That dwFlags and dwReserved according to the documentation "Must be set by the developer to 0". 3. That pCallbackFunc can be set to NULL if I don't want to handle callbacks. 4. That the last two are based on structs that I wrote C# wrappers for using the [StructLayout(LayoutKind.Sequential)]. Then instatiated an instance and generated the parameters by first creating a IntPtr with Marshal.AllocHGlobal(Marshal.SizeOf(instance)), then getting the address value which is passed as a uint for dwCallbackData and a IntPtr for phExport. The final parameter list is as follows: 1. phDoc as a IntPtr which was loaded with an address by the DAOpenDocument function called before. 2. dwOutputId as uint set to 1535 which represents FI_JPEGFIF 3. dwSpecType as int set to 2 which represents IOTYPE_ANSIPATH 4. pSpec as an IntPtr.Zero where the output will be written 5. dwFlags as uint set to 0 as directed 6. dwReserved as uint set to 0 as directed 7. pCallbackFunc as IntPtr set to NULL as I will handle results 8. dwCallBackDate as uint the address of a buffer for a struct 9. phExport as IntPtr to another struct buffer still get an undefined error from the API. Meaning that the call returns a 961 which is not defined in any of the header files. In the past I have gotten this when my choice of parameter types are incorrect. I started out using Interop Assistant which was helpful in learning how many of the parameter types get translated. It is however limited by how well I am able to glean the correct native type from the header files. For example the hDoc parameter used in the preceding function was defined as a non-filesytem handle, so attempted to use Marshal to create a handle, then used an IntPtr, and finally it turned out to be an int (actually it was &phDoc used here). So is there a more scientific way of doing this, other than trial and error? Jim
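
    A hedged sketch of how the signature above might be declared for a 32-bit P/Invoke, mapping VTDWORD/VTSYSPARAM to 4-byte-compatible types and the pointer parameters to IntPtr. The DLL name, the int return for SCCERR, the default calling convention, and treating pSpec as an ANSI path string are all assumptions, not details confirmed from the Outside In documentation:

      using System;
      using System.Runtime.InteropServices;

      static class OutsideIn
      {
          // SCCERR EXOpenExport(VTHDOC, VTDWORD, VTDWORD, VTLPVOID, VTDWORD,
          //                     VTSYSPARAM, VTLPVOID, VTSYSPARAM, VTLPHEXPORT)
          [DllImport("sccex.dll")]  // hypothetical DLL name
          public static extern int EXOpenExport(
              IntPtr hDoc,            // VTHDOC returned earlier by DAOpenDocument
              uint dwOutputId,        // e.g. 1535 for FI_JPEGFIF
              uint dwSpecType,        // e.g. 2 for IOTYPE_ANSIPATH
              [MarshalAs(UnmanagedType.LPStr)] string pSpec, // assumption: ANSI output path when spec type is IOTYPE_ANSIPATH
              uint dwFlags,           // documented as 0
              IntPtr dwReserved,      // documented as 0
              IntPtr pCallbackFunc,   // IntPtr.Zero when callbacks are not handled
              IntPtr dwCallbackData,
              out IntPtr phExport);   // receives the export handle
      }

    With a declaration in this style the Marshal.AllocHGlobal buffer for phExport is unnecessary; the marshaler fills the out parameter directly.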

    Read the article

  • Trouble joining Windows Server 2008 to Domain

    - by Jim R
    When I try to join my new server to my existing domain I get the following error: "An attempt to resolve the DNS name of a DC in the domain being joined has failed. Please verify this client is configured to reach a DNS server that can resove DNS names in the target domain." I have tried all of the following already: Successfully pinged the domain controller. Ping the new server from the domain controller by IP address and by DNS name. Ping the DC server from the new server by IP address and by DNS name. Changed the network to DHCP (it was originally static). No joy as static or DHCP. Turned off all firewall settings. Added the domain name to 'hosts' file. Added the server name of the primary domain controller to the 'hosts' file in the new server. Any ideas? Thanks in advance for any help! Jim Update: With help from J. Brian Kelly (Thanks) I have managed to narrow down the problem to a DNS issue. Specifically, UDP/53 packets are being sent (they are seen in Network Monitor), but are not getting to the DNS server. But, I do not yet know why. Update: The quested output from IPCONFIG for the HyperV host and the virtual machine. IPCONFIG from HyperV Server Windows IP Configuration Host Name . . . . . . . . . . . . : HYPER Primary Dns Suffix . . . . . . . : sfi-wfc.com Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : sfi-wfc.com Ethernet adapter Local Area Connection 4: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Primary Network Physical Address. . . . . . . . . : 00-30-48-CA-CC-7A DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::cd16:3ac2:3d4f:e275%679(Preferred) IPv4 Address. . . . . . . . . . . : 192.168.100.1(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.100.10 DHCPv6 IAID . . . . . . . . . . . : -1476382648 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-12-10-20-E9-00-30-48-CA-CC-7A DNS Servers . . . . . . . . . . . : 192.168.100.5 NetBIOS over Tcpip. . . . . . . . : Enabled Ethernet adapter Local Area Connection 3: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : sfi Description . . . . . . . . . . . : Intel(R) 82576 Gigabit Dual Port Network Connection #2 Physical Address. . . . . . . . . : 00-30-48-CA-CC-7B DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IPCONFIG from Virtual Machine Windows IP Configuration Host Name . . . . . . . . . . . . : DB Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : sfi Ethernet adapter Local Area Connection 2: Connection-specific DNS Suffix . : sfi Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter Physical Address. . . . . . . . . : 00-15-5D-66-03-02 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IPv4 Address. . . . . . . . . . . : 192.168.100.128(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Lease Obtained. . . . . . . . . . : Saturday, August 29, 2009 10:44:45 AM Lease Expires . . . . . . . . . . : Tuesday, September 01, 2009 3:08:33 PM Default Gateway . . . . . . . . . : 192.168.100.10 DHCP Server . . . . . . . . . . . : 192.168.100.5 DNS Servers . . . . . . . . . . . 
: 192.168.102.5 Primary WINS Server . . . . . . . : 192.168.100.5 NetBIOS over Tcpip. . . . . . . . : Enabled Tunnel adapter Local Area Connection* 8: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : sfi Description . . . . . . . . . . . : isatap.sfi Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Tunnel adapter Local Area Connection* 9: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface Physical Address. . . . . . . . . : 02-00-54-55-4E-01 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes
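
    A hedged way to confirm whether the SRV locator records the join needs can actually be resolved, using the domain name and DNS servers shown in the ipconfig output above (run from a command prompt on the new server; the record name is the standard AD locator record):

      REM Ask the DNS server the Hyper-V host uses
      nslookup -type=SRV _ldap._tcp.dc._msdcs.sfi-wfc.com 192.168.100.5

      REM Ask the DNS server the virtual machine is actually configured with
      nslookup -type=SRV _ldap._tcp.dc._msdcs.sfi-wfc.com 192.168.102.5

    If one server answers with the domain controller and the other does not, the UDP/53 packets that Network Monitor shows leaving the VM are going to a resolver that cannot answer for the domain.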

    Read the article

  • JCP Awards 10 Year Retrospective

    - by Heather VanCura
    As we celebrate 10 years of JCP Program Award recognition in 2012,  take a look back in the Retrospective article covering the history of the JCP awards.  Most recently, the JCP awards were  celebrated at JavaOne Latin America in Brazil, where SouJava was presented the JCP Member of the Year Award for 2012 (won jointly with the London Java Community) for their contributions and launch of the Global Adopt-a-JSR Program. This is also a good time to honor the JCP Award Nominees and Winners who have been designated as Star Spec Leads.  Spec Leads are key to the Java Community Process (JCP) program. Without them, none of the Java Specification Requests (JSRs) would have begun, much less completed and become implemented in shipping products.  Nominations for 2012 Start Spec Leads are now open until 31 December. The Star Spec Lead program recognizes Spec Leads who have repeatedly proven their merit by producing high quality specifications, establishing best practices, and mentoring others. The point of such honor is to endorse the good work that they do, showcase their methods for other Spec Leads to emulate, and motivate other JCP program members and participants to get involved in the JCP program. Ed Burns – A Star Spec Lead for 2009, Ed first got involved with the JCP program when he became co-Spec Lead of JSR 127, JavaServer Faces (JSF), a role he has continued through JSF 1.2 and now JSF 2.0, which is JSR 314. Linda DeMichiel – Linda thus involved in the JCP program from its very early days. She has been the Spec Lead on at least three JSRs and an EC member for another three. She holds a Ph.D. in Computer Science from Stanford University. Gavin King – Nominated as a JCP Outstanding Spec Lead for 2010, for his work with JSR 299. His endorsement said, “He was not only able to work through disputes and objections to the evolving programming model, but he resolved them into solutions that were more technically sound, and which gained support of its pundits.” Mike Milikich –  Nominated for his work on Java Micro Edition (ME) standards, implementations, tools, and Technology Compatibility Kits (TCKs), Mike was a 2009 Star Spec Lead for JSR 271, Mobile Information Device Profile 3. David Nuescheler – Serving as the CTO for Day Software, acquired by Adobe Systems, David has been a key player in the growth of the company’s global content management solution. In 2002, he became Spec Lead for JSR 170, Content Repository for Java Technology API, continuing for the subsequent version, JSR 283. Bill Shannon – A well-respected name in the Java community, Bill came to Oracle from Sun as a Distinguished Engineer and is still performing at full speed as Spec Lead for JSR 342, Java EE 7,  as an alternate EC member, and hands-on problem solver for the Java community as a whole. Jim Van Peursem – Jim holds a PhD in Computer Engineering. He was part of the Motorola team that worked with Sun labs on the Spotless VM that became the KVM. From within Motorola, Jim has been responsible for many aspects of Java technology deployment, from an independent Connected Limited Device Configuration (CLDC) and Mobile Information Device Profile (MIDP) implementations, to handset development, to working with the industry in defining many related standards. Participation in the JCP Program goes well beyond technical proficiency. 
The JCP Awards Program is an attempt to say “Thank You” to all of the JCP members, Expert Group Members, Spec Leads, and EC members who give their time to contribute to the evolution of Java technology.

    Read the article

  • How should I plan the inheritance structure for my game?

    - by Eric Thoma
    I am trying to write a platform shooter in C++ with a really good class structure for robustness. The game itself is secondary; it is the learning process of writing it that is primary. I am implementing an inheritance tree for all of the objects in my game, but I find myself unsure on some decisions. One specific issue that it bugging me is this: I have an Actor that is simply defined as anything in the game world. Under Actor is Character. Both of these classes are abstract. Under Character is the Philosopher, who is the main character that the user commands. Also under Character is NPC, which uses an AI module with stock routines for friendly, enemy and (maybe) neutral alignments. So under NPC I want to have three subclasses: FriendlyNPC, EnemyNPC and NeutralNPC. These classes are not abstract, and will often be subclassed in order to make different types of NPC's, like Engineer, Scientist and the most evil Programmer. Still, if I want to implement a generic NPC named Kevin, it would nice to be able to put him in without making a new class for him. I could just instantiate a FriendlyNPC and pass some values for the AI machine and for the dialogue; that would be ideal. But what if Kevin is the one benevolent Programmer in the whole world? Now we must make a class for him (but what should it be called?). Now we have a character that should inherit from Programmer (as Kevin has all the same abilities but just uses the friendly AI functions) but also should inherit from FriendlyNPC. Programmer and FriendlyNPC branched away from each other on the inheritance tree, so inheriting from both of them would have conflicts, because some of the same functions have been implemented in different ways on the two of them. 1) Is there a better way to order these classes to avoid these conflicts? Having three subclasses; Friendly, Enemy and Neutral; from each type of NPC; Engineer, Scientist, and Programmer; would amount to a huge number of classes. I would share specific implementation details, but I am writing the game slowly, piece by piece, and so I haven't implemented past Character yet. 2) Is there a place where I can learn these programming paradigms? I am already trying to take advantage of some good design patterns, like MVC architecture and Mediator objects. The whole point of this project is to write something in good style. It is difficult to tell what should become a subclass and what should become a state (i.e. Friendly boolean v. Friendly class). Having many states slows down code with if statements and makes classes long and unwieldy. On the other hand, having a class for everything isn't practical. 3) Are there good rules of thumb or resources to learn more about this? 4) Finally, where does templating come in to this? How should I coordinate templates into my class structure? I have never actually taken advantage of templating honestly, but I hear that it increases modularity, which means good code.
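
    One widely used alternative to the Friendly/Enemy/Neutral subclass explosion described above is composition: give NPC an alignment-specific AI strategy object instead of a subclass per profession-alignment combination. A minimal C++ sketch; all names here are illustrative, not taken from the post:

      #include <memory>
      #include <string>

      // Strategy: one small object per alignment, reusable by any NPC profession
      struct AiBehavior {
          virtual ~AiBehavior() = default;
          virtual void update(class NPC& npc) = 0;
      };

      struct FriendlyAi : AiBehavior { void update(NPC&) override { /* help the player */ } };
      struct EnemyAi    : AiBehavior { void update(NPC&) override { /* attack the player */ } };

      class NPC /* : public Character */ {
      public:
          NPC(std::string name, std::unique_ptr<AiBehavior> ai)
              : name_(std::move(name)), ai_(std::move(ai)) {}
          void think() { ai_->update(*this); }
      private:
          std::string name_;
          std::unique_ptr<AiBehavior> ai_;
      };

      class Programmer : public NPC {   // profession subclass, works with any alignment
      public:
          Programmer(std::string name, std::unique_ptr<AiBehavior> ai)
              : NPC(std::move(name), std::move(ai)) {}
      };

      // "Kevin, the one benevolent Programmer" needs no new class:
      // Programmer kevin("Kevin", std::unique_ptr<AiBehavior>(new FriendlyAi));

    Swapping the AI object at runtime also covers characters that change sides, which a fixed inheritance position cannot do.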

    Read the article

  • How to use Sprockets Rails plugin on Heroku?

    - by Kevin
    Hi, I just deployed my Rails app to Heroku, but the Javascripts that were using the Sprockets plugin don't work. I understood that, because my Heroku app is read-only, Sprockets won't work. I've found this sprockets_on_heroku plugin that should do the work, but I don't really get how to use it:
      I added config.gem sprockets in config/environment.rb
      I added sprockets in my .gems file
      I pushed these on Heroku and Sprockets was successfully installed
      I locally ran script/plugin install git://github.com/jeffrydegrande/sprockets_on_heroku.git and the plugin was successfully installed
    Nothing changed on Heroku, so I tried to install the plugin on Heroku with heroku plugins:install git://github.com/jeffrydegrande/sprockets_on_heroku.git, which returned sprockets_on_heroku installed, but then a heroku restart or a heroku plugins command would return this:
      ~/.heroku/plugins/sprockets_on_heroku/init.rb:1: uninitialized constant ActionController (NameError)
      from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/plugin.rb:25:in `load'
      from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/plugin.rb:25:in `load!'
      from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/plugin.rb:22:in `each'
      from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/plugin.rb:22:in `load!'
      from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/command.rb:14:in `run'
      from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/heroku:14
      from /opt/local/bin/heroku:19:in `load'
      from /opt/local/bin/heroku:19
    What should I do? Kevin

    Read the article

  • .NET COM Interop on Windows 7 64Bit gives me a headache

    - by Kevin Stumpf
    Hey guys, .NET COM interop so far has always been working quite nicely. Since I upgraded to Windows 7 I can't get my .NET COM objects to work anymore. My COM object is as easy as:

      namespace Crap {
          [ComVisible(true)]
          [Guid("2134685b-6e22-49ef-a046-74e187ed0d21")]
          [ClassInterface(ClassInterfaceType.None)]
          public class MyClass : IMyClass {
              public MyClass() {}
              public void Test() {
                  MessageBox.Show("Finally got in here.");
              }
          }
      }

      namespace Crap {
          [Guid("1234685b-6e22-49ef-a046-74e187ed0d21")]
          public interface IMyClass {
          }
      }

    The assembly is marked ComVisible as well. I register the assembly using regasm /codebase /tlb "path"; it registers successfully (admin mode). I tried regasm 32 and 64 bit. Both times I get the error "ActiveX component can't create object Crap.MyClass" using this VBScript:

      dim objReg
      Set objReg = CreateObject("Crap.MyClass")
      MsgBox typename(objReg)

    fuslogvw doesn't give me any hints either. That COM object works perfectly on my Vista 32 Bit machine. I don't understand why I haven't been able to google a solution for that problem.. am I really the only person that ever got into that problem? Looking at OleView I see my object is registered successfully. I am able to create other COM objects as well.. it only does not work with my own ones. Thank you, Kevin
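
    On 64-bit Windows the bitness of both regasm and the script host matters; a hedged sketch of the usual checks (the framework version in the paths and the DLL location are assumptions, adjust to the installed versions):

      REM Register for 64-bit COM clients
      C:\Windows\Microsoft.NET\Framework64\v2.0.50727\regasm.exe /codebase /tlb "C:\path\Crap.dll"

      REM Register for 32-bit COM clients
      C:\Windows\Microsoft.NET\Framework\v2.0.50727\regasm.exe /codebase /tlb "C:\path\Crap.dll"

      REM Run the test script with the 32-bit script host...
      C:\Windows\SysWOW64\cscript.exe test.vbs

      REM ...and with the 64-bit host for comparison
      C:\Windows\System32\cscript.exe test.vbs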

    Read the article

  • BackgroundWorker From ASP.Net Application

    - by Kevin
    We have an ASP.Net application that lets administrators work with and perform operations on large sets of records. For example, we have a "Polish Data" task that an administrator can perform to clean up data for a record (e.g. reformat phone numbers, social security numbers, etc.). When performed on a small number of records, the task completes relatively quickly. However, when a user performs the task on a larger set of records, the task may take several minutes or longer to complete. So, we want to implement these kinds of tasks using some kind of asynchronous pattern. For example, we want to be able to launch the task, and then use AJAX polling to provide a progress bar and status information. I have been looking into using the BackgroundWorker class, but I have read some things online that make me pause. I would love to get some additional advice on this. For example, I understand that the BackgroundWorker will actually use the thread pool from the current application. In my case, the application is an ASP.Net web site. I have read that this can be a problem because when the application recycles, the background workers will be terminated. Some of the jobs I mentioned above may take 3 minutes, but others may take a few hours. Also, we may have several hundred administrators all performing similar operations during the day. Will the ASP.Net application thread pool be able to handle all of these background jobs efficiently while still performing its normal request processing? So, I am trying to determine if using the BackgroundWorker class and approach is right for our needs. Should I be looking at an alternative approach? Thanks and sorry for such a long post! Kevin
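
    As a point of comparison while weighing BackgroundWorker, a minimal C# sketch of the same idea with ThreadPool.QueueUserWorkItem and a shared progress table that an AJAX endpoint could poll. It still runs inside the ASP.NET worker process, so the app-recycling concern above applies to it just as much; all names are illustrative:

      using System;
      using System.Collections.Generic;
      using System.Threading;

      public static class PolishJobs
      {
          private static readonly Dictionary<Guid, int> Progress = new Dictionary<Guid, int>();
          private static readonly object Sync = new object();

          // Called by the page/controller that starts the task; returns an id to poll.
          public static Guid Start(IList<int> recordIds)
          {
              Guid jobId = Guid.NewGuid();
              lock (Sync) { Progress[jobId] = 0; }

              ThreadPool.QueueUserWorkItem(delegate
              {
                  for (int i = 0; i < recordIds.Count; i++)
                  {
                      PolishRecord(recordIds[i]);   // placeholder for the real clean-up work
                      lock (Sync) { Progress[jobId] = (i + 1) * 100 / recordIds.Count; }
                  }
              });
              return jobId;
          }

          // Called by the AJAX polling endpoint to drive the progress bar.
          public static int GetProgress(Guid jobId)
          {
              lock (Sync) { return Progress.ContainsKey(jobId) ? Progress[jobId] : -1; }
          }

          private static void PolishRecord(int recordId) { /* reformat phone numbers, SSNs, ... */ }
      }

    For jobs that run for hours, moving this out of the web process entirely (a Windows service or scheduled task reading a queue) avoids the recycling problem altogether.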

    Read the article

  • Intern working for Indian NGO - Help with PHP 4, advising staff

    - by Kevin Burke
    Hello, For the past three months I've been working for an Indian NGO (http://sevamandir.org), doing some volunteer work in the field but also trying to improve their website, which needs a ton of work. Recently I've been trying to fix the "subscribe to newsletter" button, which is broken. I used filter_var to filter the email input, but when I tried to test this out I got an error. Then I learned that the web host is still using php version 4.3.2 and register_globals is turned on. I've mentioned that they should upgrade their web host before (they are paying around $50 per year for Rediff Web Hosting, complete with 100MB storage and 1 MySQL database). That would add a lot of complexity for the IT staff of 3, who would have to update everyone's email information (I assume? this is a 250-person organization), and have me find a new web host and teach them about it. The staff isn't that sophisticated about web usage - the head guy still uses IE6, and the website's laid out in tables (they use Dreamweaver WYSIWYG to lay out pages). So I've got two options - use regular expressions to filter the email, which I'm not that skilled at doing (and would be more vulnerable to exploitation after I leave), turn off register globals and then try to teach the staff what I'm doing, or try to get them to upgrade their versions of PHP and MySQL and/or change web host. I'd appreciate some advice. Thanks for your help, Kevin
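
    Since filter_var() does not exist before PHP 5.2, here is a hedged, PHP 4-compatible sketch of the regular-expression route mentioned above; the pattern is deliberately conservative and is an illustration, not an RFC-complete validator:

      <?php
      // Works on PHP 4.3: no filter_var(), just trim() plus a conservative regex.
      function is_probably_valid_email($email)
      {
          $email = trim($email);
          if (strlen($email) > 254) {
              return false;
          }
          // letters/digits/._%+- before @, a dotted domain, 2-6 letter TLD
          return (bool) preg_match('/^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,6}$/', $email);
      }

      $email = isset($_POST['email']) ? $_POST['email'] : '';
      if (!is_probably_valid_email($email)) {
          echo 'Please enter a valid e-mail address.';
      }
      ?>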

    Read the article

  • Pass a data.frame column name to a function

    - by Kevin Middleton
    I'm trying to write a function to accept a data.frame (x) and a column from it. The function performs some calculations on x and later returns another data.frame. I'm stuck on the best-practices method to pass the column name to the function. The two minimal examples fun1 and fun2 below produce the desired result, being able to perform operations on x$column, using max() as an example. However, both rely on the seemingly (at least to me) inelegant (1) call to substitute() and possibly eval() and (2) the need to pass the column name as a character vector.

      fun1 <- function(x, column){
        do.call("max", list(substitute(x[a], list(a = column))))
      }
      fun2 <- function(x, column){
        max(eval((substitute(x[a], list(a = column)))))
      }

      df <- data.frame(A = 1:20, B = rnorm(10))
      fun1(df, "B")
      fun2(df, "B")

    I would like to be able to call the function as fun(df, B), for example. Other options I have considered but have not tried:
      Pass column as an integer of the column number. I think this would avoid substitute(). Ideally, the function could accept either.
      with(x, get(column)), but, even if it works, I think this would still require substitute.
      Make use of formula() and match.call(), neither of which I have much experience with.
    Subquestion: Is do.call() preferred over eval()? Thanks, Kevin
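
    For comparison, a sketch of two common idioms that avoid substitute()/eval() on the data.frame: [[ ]] indexing when the column name arrives as a string, and deparse(substitute()) when it arrives unquoted (both assume the column exists):

      # Column name passed as a character vector: no substitute()/eval() needed
      fun3 <- function(x, column) {
        max(x[[column]])
      }

      # Column name passed unquoted: capture it, then fall back to [[ ]]
      fun4 <- function(x, column) {
        col <- deparse(substitute(column))
        max(x[[col]])
      }

      df <- data.frame(A = 1:20, B = rnorm(10))
      fun3(df, "B")
      fun4(df, B)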

    Read the article

  • Sanitizing user input that will later be e-mailed - what should I be worried about?

    - by Kevin Burke
    I'm interning for an NGO in India (Seva Mandir, http://sevamandir.org) and trying to fix their broken "subscribe to newsletter" box. Because the staff isn't very sophisticated and our web host isn't great, I decided to send the relevant data to the publications person via mail() instead of storing it in a MySQL database. I know that it's best to treat user input as malicious, and I've searched the SO forums for posts relevant to escaping user data for sending in a mail message. I know the data should be escaped; what should I be worried about and what's the best way to sanitize the input before emailing it? Form flow:
      1. User enters email on homepage and clicks Submit
      2. User enters name, address, more information on second page (bad usability, I know, but my boss asked me to) and clicks "Submit"
      3. Collect the data via $_POST and email it to the publications editor (and possibly send a confirmation to the subscriber).
    I am going to sanitize the email in step 2 and the other data in step 3. I appreciate your help, Kevin
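
    The classic risk with mail() and user input is e-mail header injection: newlines smuggled into anything that ends up in the To, Subject, or additional-headers arguments. A hedged PHP sketch of stripping CR/LF before the data reaches mail(); the function, field, and address names are illustrative placeholders:

      <?php
      // Remove anything that could start a new mail header (CR / LF and their %-encodings)
      function strip_header_newlines($value)
      {
          return preg_replace('/(\r|\n|%0a|%0d)/i', '', $value);
      }

      $name    = strip_header_newlines(trim($_POST['name']));
      $email   = strip_header_newlines(trim($_POST['email']));
      $address = trim($_POST['address']);   // goes in the body only, so newlines are harmless there

      $to      = 'publications@example.org';            // placeholder address
      $subject = 'New newsletter subscriber: ' . $name;
      $body    = "Name: $name\nE-mail: $email\nAddress:\n$address\n";
      $headers = 'From: website@example.org';           // fixed, never user-supplied

      mail($to, $subject, $body, $headers);
      ?>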

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >