Search Results

Search found 24766 results on 991 pages for 'information retrieval'.


  • How to get encoding from MAPI message with PR_BODY_A tag (windows mobile)?

    - by SadSido
    Hi, everyone! I am developing a program that handles incoming e-mail and SMS through Windows Mobile MAPI. The code basically looks like this:

        ulBodyProp = PR_BODY_A;
        hr = piMessage->OpenProperty(ulBodyProp, NULL, STGM_READ, 0, (LPUNKNOWN*)&piStream);
        if (hr == S_OK)
        {
            // ... get body size in bytes ...
            STATSTG statstg;
            piStream->Stat(&statstg, 0);
            ULONG cbBody = statstg.cbSize.LowPart;

            // ... allocate memory for the buffer ...
            BYTE* pszBodyInBytes = NULL;
            boost::scoped_array<BYTE> szBodyInBytesPtr(pszBodyInBytes = new BYTE[cbBody+2]);

            // ... read body into pszBodyInBytes ...
        }

    That works, and I get a message body. The problem is that this body is multibyte-encoded, and I need to return a Unicode string. I suppose I have to use the ::MultiByteToWideChar() function, but how can I tell which codepage to apply? Using CP_UTF8 is naive, because the body may simply not be in UTF-8. Using CP_ACP works, well, sometimes, but sometimes it does not. So my question is: how can I retrieve the message's codepage? Does MAPI provide any functions for it? Or is there a way to decode a multibyte string other than MultiByteToWideChar()? Thanks!


  • Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

    - by TDD
    We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product, and we have unit tests that use MSTest (Visual Studio unit tests). What solutions exist for setting up a continuous integration (i.e. build) machine? The requirements are:

    - A build should be kicked off when necessary (i.e. code has changed in the repositories we care about).
    - Before the actual build, the latest version of the source code must be acquired from the repository we are building from.
    - The build must build the entire product.
    - The build must build all unit tests.
    - The build must execute all unit tests.
    - A summary of success/failure must be sent out after the build has finished; this must include information about the build itself, but also about which unit tests failed and which ones succeeded.
    - The summary must list the changesets that are in this build but were not yet in the previous successful (!) build.
    - The system must be configurable so that it can build from multiple branches/repositories.

    Ideally, this system would run on a single box (our product isn't that big) without any server components. What solutions are currently available? What are their pros and cons? From the list above, what can be done and what cannot? Thanks


  • How can I reorder an mbox file chronologically?

    - by Joshxtothe4
    Hello, I have a single spool mbox file that was created with Evolution, containing a selection of emails that I wish to print. My problem is that the emails are not placed into the mbox file chronologically. I would like to know the best way to order the messages from first to last using bash, Perl or Python. I would like to order by date received for messages addressed to me, and by date sent for messages sent by me. Would it perhaps be easier to use maildir files or such? The emails currently exist in this format:

        From [email protected] Fri Aug 12 09:34:09 2005
        Message-ID: <[email protected]>
        Date: Fri, 12 Aug 2005 09:34:09 +0900
        From: me <[email protected]>
        User-Agent: Mozilla Thunderbird 1.0.6 (Windows/20050716)
        X-Accept-Language: en-us, en
        MIME-Version: 1.0
        To: someone <[email protected]>
        Subject: Re: (no subject)
        References: <[email protected]>
        In-Reply-To: <[email protected]>
        Content-Type: text/plain; charset=ISO-8859-1; format=flowed
        Content-Transfer-Encoding: 8bit
        Status: RO
        X-Status:
        X-Keywords:
        X-UID: 371
        X-Evolution-Source: imap://[email protected]/
        X-Evolution: 00000002-0010

        Hey
        the actual content of the email

        someone wrote:
        > lines of quoted text

    I am wondering if there is a way to use this information to easily reorganize the file, perhaps with Perl or such.
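    A minimal Python sketch of the simple case: sort every message by its Date header and write a new mbox. It assumes each message carries a parseable Date header with a timezone offset (messages missing one would need a fallback key), and it does not yet distinguish sent from received mail; the file names are hypothetical.

        import mailbox
        from email.utils import parsedate_to_datetime

        inbox = mailbox.mbox('input.mbox')   # hypothetical file names
        msgs = sorted(inbox, key=lambda m: parsedate_to_datetime(m['Date']))

        out = mailbox.mbox('sorted.mbox')
        for m in msgs:
            out.add(m)
        out.flush()

    Splitting received vs. sent would just mean choosing the sort key per message, e.g. based on whether your own address appears in the From header.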


  • Help me choose a web development framework/platform that will make me learn something

    - by Sergio Tapia
    I'm having a bit of an information overload these past two days. I'm planning to start my own website that will allow local businesses to list their items on sale; users can then come in and search for "Abercrombie t-shirt", and the stores that sell them will be listed. It's a neat little project I'm really excited about, and I'm sure it'll take off, but I'm having problems from the get-go. Sure, I could use ASP.NET for it (I'm a bit familiar with it, and the IDE for ASP.NET pages is second to none), but I feel this is a great chance to learn something new, to branch out a bit and not regurgitate .NET like a robot. I've been looking and asking around, but it's all just noise and I can't make an educated decision. Can you help me choose a framework/platform that will teach me something that's nice to know in the job market, but also helps me grow as a professional? So far I've looked at:

    - Ruby on Rails
    - Kohana
    - CakePHP
    - CodeIgniter
    - Symfony

    But they are all very esoteric to me, and I have trouble even finding out which IDE will give me auto-complete for their keywords/methods. Thank you for your time.


  • Making a clean page via page.tpl.php

    - by user360051
    I have a Drupal module creating a page via hook_menu(). I am trying to make the page output only what I want, with no extraneous HTML. You can view the page here: http://www.thomashansen.me/chat/thomas. If you look at the source, you can see a strange script tag at the end. My page-chat.tpl.php looks like this:

        <?php
        // $Id$
        ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xml:lang="<?php print $language->language ?>"
              lang="<?php print $language->language ?>"
              dir="<?php print $language->dir ?>">
        <head>
        </head>
        <body>
          <?php print $content; ?>
        </body>
        </html>

    Where is that script tag coming from, and how do I get rid of it? If you need more information, just ask.


  • Is it possible to replace values in a queryset before sending it to your template?

    - by Issy
    Hi guys, I'm wondering if it's possible to change a value returned from a queryset before sending it off to the template. Say, for example, you have a bunch of records:

        Date       | Time  | Description
        10/05/2010 | 13:30 | Testing...

    However, based on the day of the week, the time may change, and this rule is static. For example, on a Monday the time is ALWAYS 15:00. Now, you could add another table to configure special cases, but to me that seems like overkill, as this is a rule. How would you replace that value before sending it to the template? I thought about using the new if tags (if day==1), but this is business logic rather than presentation. I tested this in a custom template tag:

        def render(self, context):
            result = self.model._default_manager.filter(from_date__lte=self.now).filter(to_date__gte=self.now)
            if self.day == 4:
                result = result.exclude(type__exact=2).order_by('time')
            else:
                result = result.order_by('type')
            result[0].time = '23:23:23'
            context[self.varname] = result
            return ''

    However, it still displays the results from the DB. Is this somehow related to the 'lazy' evaluation of querysets? Thanks!

    Update, responding to the comments below: the value is not stored wrongly in the DB; it is stored correctly. There is just a small side case where it needs to change. For example, I have a from date and a to date, and my query checks whether today's date falls between them. With that, they could set up a from date - to date range for an entire year, and the special cases (like Mondays, as an example) are taken care of. If the special case had to live in the DB, you would have to capture several more records to cover it, i.e. you would be capturing the same information just to cater for that one day when the time changes. (And the time always changes on the same day, and is always the same.)
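    A hedged sketch of one likely fix: indexing an unevaluated queryset (result[0]) runs a fresh query and returns a throwaway object, and the template loop later triggers yet another evaluation, so in-place edits are lost. Evaluating the queryset into a list first makes the mutation stick (self.day, self.now and the Monday rule are taken from the question):

        import datetime

        def render(self, context):
            qs = (self.model._default_manager
                      .filter(from_date__lte=self.now, to_date__gte=self.now))
            if self.day == 4:
                qs = qs.exclude(type__exact=2).order_by('time')
            else:
                qs = qs.order_by('type')
            items = list(qs)              # evaluate once; edits now persist
            if items and self.day == 1:   # Monday rule from the question
                items[0].time = datetime.time(15, 0)
            context[self.varname] = items
            return ''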


  • Why do C# containers and GUI classes use int and not uint for size-related members?

    - by smerlin
    I usually program in C++, but for school I have to do a project in C#. So I went ahead and coded like I was used to in C++, but was surprised when the compiler complained about code like the following:

        const uint size = 10;
        ArrayList myarray = new ArrayList(size); // Arg 1: cannot convert from 'uint' to 'int'

    OK, they expect int as the argument type, but why? I would feel much more comfortable with uint as the argument type, because uint fits much better in this case. Why do they use int as the argument type pretty much everywhere in the .NET library, even though negative numbers don't make any sense in many cases (no container or GUI element can have a negative size)? If the reason they used int is that they didn't expect the average user to care about signedness, why didn't they additionally add overloads for uint? Is this just MS not caring about sign correctness, or are there cases where negative values carry some information (an error code?) for container/GUI widget sizes?


  • Unable to create PDB file

    - by Ryan Smith
    For some reason this error started popping up today on one of my projects:

        Error 1: Unable to write to output file 'C:\MyProject\Release\MyProject.pdb': Unspecified error

    If I go into advanced compile options and change it to not generate debug info, my project compiles fine. I have tried setting the permissions on the Release folder to full control for everyone, so I assume it's not a permissions issue. Also, I don't see anything in my log files that would give me more information about the issue. Does anyone know why this error would just start showing up, or a way to fix it? Thanks.

    Update: I have rebooted my machine, restarted VS several times, and have even completely deleted the existing OBJ file where the issue is happening. It still gives me the same error. This is a simple one-project solution that was working fine just last week. It appears to be an issue with VS building the PDB file, because I can delete the PDBs out of the Release and Debug folders without issue. When I try rebuilding, VS will start creating the file (about 1.4MB in size) but I still get the error.


  • File descriptor limits and default stack sizes

    - by Charles
    Where I work we build and distribute a library and a couple of complex programs built on that library. All code is written in C and is available on most 'standard' systems like Windows, Linux, AIX, Solaris, and Darwin. I started in the QA department, and while running tests recently I have been reminded several times that I need to remember to set the file descriptor limits and default stack sizes higher, or bad things will happen. This is particularly the case with Solaris and now Darwin. Now this is very strange to me, because I am a believer in zero required environment fiddling to make a product work. So I am wondering if there are times when this sort of requirement is a necessary evil, or if we are doing something wrong.

    Edit: Great comments that describe the problem and a little background. However, I do not believe I worded the question well enough. Currently we require customers, and hence us the testers, to set these limits before running our code. We do not do this programmatically. And this is not a situation where they MIGHT run out; under normal load our programs WILL run out and segfault. So, rewording the question: is requiring the customer to change these ulimit values in order to run our software to be expected on some platforms (i.e. Solaris, AIX), or are we as a company making it too difficult for these users to get going?

    Bounty: I added a bounty to hopefully get a little more information on what other companies are doing to manage these limits. Can you set these programmatically? Should we? Should our programs even be hitting these limits, or could this be a sign that things might be a bit messy under the covers? That is really what I want to know; as a perfectionist, a seemingly dirty program really bugs me.
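    On the "can you set these programmatically?" part: POSIX systems do let a process raise its own soft limits up to the hard limit at startup. A minimal sketch (shown in Python for brevity; the equivalent C call is setrlimit(2), and the target numbers here are made up):

        import resource

        def raise_soft_limit(which, wanted):
            soft, hard = resource.getrlimit(which)
            new_soft = wanted if hard == resource.RLIM_INFINITY else min(wanted, hard)
            if new_soft > soft:
                resource.setrlimit(which, (new_soft, hard))

        raise_soft_limit(resource.RLIMIT_NOFILE, 4096)          # file descriptors (made-up target)
        raise_soft_limit(resource.RLIMIT_STACK, 64 * 1024**2)   # stack size in bytes (made-up target)

    One caveat: on most systems the main thread's stack size is fixed when the process starts, so raising RLIMIT_STACK in-process only helps threads and child processes created afterwards, and going beyond the hard limit still needs the administrator.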


  • LaTeX - Apply an operation to every character in a string

    - by hroest
    Hi, I am using LaTeX and I have a problem concerning string manipulation. I want to apply an operation to every character of a string; specifically, I want to replace every character "x" with "\discretionary{}{}{}x". I want to do this because I have a long string (DNA) which I want to be able to break at any point without hyphenation. So I would like to have a command called "\myDNA" that does this for me, instead of manually inserting \discretionary{}{}{} after every character. Is this possible? I have looked around the web and there wasn't much helpful information on this topic (at least not any I could understand), and I hoped that you could help.

    Edit, to clarify: what I want to see in the finished document is something like this:

        the dna sequence is CTAAAGAAAACAGGACGATTAGATGAGCTTGAGAAAGCCATCACCACTCA
        AATACTAAATGTGTTACCATACCAAGCACTTGCTCTGAAATTTGGGGACTGAGTACACCAAATACGATAG
        ATCAGTGGGATACAACAGGCCTTTACAGCTTCTCTGAACAAACCAGGTCTCTTGATGGTCGTCTCCAGGT
        ATCCCATCGAAAAGGATTGCCACATGTTATATATTGCCGATTATGGCGCTGGCCTGATCTTCACAGTCAT
        CATGAACTCAAGGCAATTGAAAACTGCGAATATGCTTTTAATCTTAAAAAGGATGAAGTATGTGTAAACC
        CTTACCACTATCAGAGAGTTGAGACACCAGTTTTGCCTCCAGTATTAGTGCCCCGACACACCGAGATCCT
        AACAGAACTTCCGCCTCTGGATGACTATACTCACTCCATTCCAGAAAACACTAACTTCCCAGCAGGAATT

    Just plain line breaks, without any hyphens. The DNA sequence will be one long string without any spaces or anything, but it can break at any point. This is why my idea was to insert a "\discretionary{}{}{}" after every character, so that it can break at any point without inserting any hyphens.
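    A hedged sketch of two approaches. The seqsplit package is built for exactly this kind of sequence; alternatively, a small recursive macro can walk the argument one token at a time (this assumes the sequence contains only plain letters, with no spaces or braces):

        % Option 1: the seqsplit package
        \usepackage{seqsplit}
        % ... in the body (sequence from the question, truncated here):
        the dna sequence is \seqsplit{CTAAAGAAAACAGGACGATTAGATGAGCTT}

        % Option 2: a hand-rolled \myDNA macro
        \newcommand{\myDNA}[1]{\dnasplit#1\relax}
        \def\dnasplit#1{%
          \ifx\relax#1\else#1\discretionary{}{}{}\expandafter\dnasplit\fi}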


  • Detecting an online poker cheat

    - by Tom Gullen
    It recently emerged on a large poker site that some players were possibly able to see all opponents' cards as they played, by exploiting a security vulnerability that was later discovered. A naïve cheater would win at an incredibly fast rate; these cheats are usually caught very quickly, and even if not, they are easy to detect with a quick scan through their hand histories. The more difficult problem occurs when the cheater exhibits intelligence: bluffing in spots where they are bound to be called, calling river bets with the worst hands. The basic premise is that they lose pots on purpose to disguise their ability to see other players' cards, and so they win at a reasonably realistic rate. Given:

    - a data set of millions of verified and complete-information hand histories
    - theoretically unlimited computing power
    - the game is No Limit Hold'em, although suggestions based on Omaha or limit poker may be beneficial

    how could we classify these cheaters reasonably accurately? The original 2+2 thread appeals for ideas, and I thought the SO community might have some useful suggestions. It's an interesting problem also because it is current, and it has real application in bettering the world if someone finds a creative solution, as there is a good chance genuine players will have funds refunded to them when identified cheaters are discovered.
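    One hedged sketch of a core statistic: a cheater's decisions correlate with opponents' actual hidden cards more than an honest player's can. Everything here is assumed scaffolding; `decisions` and the evaluator `equity_vs()` are hypothetical stand-ins for a real hand-history parser and equity calculator.

        def suspicion_score(decisions, equity_vs):
            """Average extra edge a player's choices gain from hidden information."""
            edge = 0.0
            for d in decisions:
                # Equity a cheater would know (vs. actual hidden cards) versus
                # equity an honest player could estimate (vs. a plausible range).
                eq_actual = equity_vs(d.hole_cards, d.opponent_cards)  # hypothetical fields
                eq_range = equity_vs(d.hole_cards, d.opponent_range)
                if d.action == 'fold':
                    edge += eq_range - eq_actual   # folding exactly when "unlucky"
                else:
                    edge += eq_actual - eq_range   # continuing exactly when "lucky"
            return edge / max(len(decisions), 1)

    Honest players should score near zero over a large sample; deliberately losing pots lowers the win rate but not this correlation, which is what makes the disguised cheater detectable in principle.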


  • MS Access SQL - where clause to get records by year, with the year taken from an MS Access form

    - by primus285
    OK, back again. I have a problem getting a drop-down list to populate based on information in two fields. I have got the SQL correct for selecting just one year if I put DateValue('01/01/2001') in both places, but now I am trying to get it to grab the year data from the MS Access form, specifically another drop-down named "cboYear". I'd hate to have to do something in VB unless necessary. So far, this pulls up something (it's always incorrect):

        SELECT DISTINCT Database_New.ASEC FROM Database_New
        WHERE Database_New.Date >= DateValue('01/01/' & [cboYear])
          AND Database_New.Date <= DateValue('12/31/' & [cboYear]);

    and these two:

        SELECT DISTINCT Database_New.ASEC FROM Database_New
        WHERE Database_New.Date >= DateValue('01/01/' + [cboYear])
          AND Database_New.Date <= DateValue('12/31/' + [cboYear]);

        SELECT DISTINCT Database_New.ASEC FROM Database_New
        WHERE Database_New.Date >= DateValue('01/01/' AND [cboYear])
          AND Database_New.Date <= DateValue('12/31/' AND [cboYear]);

    both give errors saying the expression is too complex to compute. It's probably something simple, but where do I go from here?
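    A hedged guess at the fix: when a query runs outside the form's own context, Access treats a bare [cboYear] as an unknown parameter rather than as the combo box, so the control usually has to be referenced through the form. A sketch (frmMain is a placeholder for the actual form name):

        SELECT DISTINCT Database_New.ASEC
        FROM Database_New
        WHERE Database_New.[Date] >= DateSerial([Forms]![frmMain]![cboYear], 1, 1)   -- frmMain is a placeholder
          AND Database_New.[Date] <= DateSerial([Forms]![frmMain]![cboYear], 12, 31);

    DateSerial(year, month, day) also sidesteps the string-to-date conversion and any regional date-format surprises that DateValue('01/01/' & ...) can run into.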


  • SQL Server service broker reporting as off when I have written a query to turn it on

    - by dotnetdev
    I have made a small ASP.NET website that uses sqlcachedependency. I get this error:

        The SQL Server Service Broker for the current database is not enabled, and as a result query
        notifications are not supported. Please enable the Service Broker for this database if you
        wish to use notifications.

        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated
        in the code.

        Exception Details: System.InvalidOperationException: The SQL Server Service Broker for the
        current database is not enabled, and as a result query notifications are not supported.
        Please enable the Service Broker for this database if you wish to use notifications.

        Source Error:
        Line 12: System.Data.SqlClient.SqlDependency.Start(connString);

    That is the offending line in my global.asax. However, in SQL Server 2005 (I connect to it and run the SQL Server service when I debug my site), I enabled the Service Broker like so, and it was successful:

        ALTER DATABASE mynewdatabase SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE

    What am I missing? I am trying to use SQL cache dependency and have followed all the procedures. Thanks
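    A hedged check worth running: confirm which database the connection string actually points at, and whether the broker really is enabled there. If the database was ever restored or copied, its broker identity can be stale even though the setting looks right, in which case issuing a new one is a common fix:

        -- verify the broker state for the database the site connects to
        SELECT name, is_broker_enabled, service_broker_guid
        FROM sys.databases
        WHERE name = 'mynewdatabase';

        -- if it was restored/copied, give it a fresh broker identity
        ALTER DATABASE mynewdatabase SET NEW_BROKER WITH ROLLBACK IMMEDIATE;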


  • What are the common compliance standards for software products?

    - by Jay
    This is a very generic question about software products: I would like to know which compliance standards are applicable to a software product. I know that alone gives away nothing, so here is an example of what I am referring to. CiSecurity lists products certified by them as compliant with the standards published at their website, cisecurity.org. Compliance could be as simple as answering a questionnaire about your product and getting it approved by a third party like CiSecurity, or it could apply to your whole organization, as with PCI-DSS compliance. I would be very interested in knowing which standards the products you know, designed or created comply with. To give you the context behind this question: I am the developer of a data-masking tool. The tool masks on-screen HTML text in a banking web application using filters. So, for instance, if the bank application lists user information including SSNs, my product, when integrated with the banking product, automatically identifies the SSN pattern and masks it into a predefined format. My product marketing team wants more buzzwords like "compliance" to be able to sell it to more banking clients. Hence, understanding which compliance standards apply to products, security compliance in particular, is a key research item for me at this point. I appreciate all your help and suggestions.


  • To what extent should code try to explain fatal exceptions?

    - by Andrzej Doyle
    I suspect that all non-trivial software is likely to experience situations where it hits an external problem it cannot work around and thus needs to fail. This might be due to bad configuration, an external server being down, a full disk, etc. In these situations, especially if the software is running in non-interactive mode, I expect that all one can really do is log an error and wait for the admin to read the logs and fix the problem. If someone happens to interact with the software in the meantime, e.g. a request comes in to a server that failed to initialize properly, then perhaps an appropriate hint can be given to check the logs, and maybe the error can even be echoed (depending on whether you can tell if they're a technical person as opposed to a business user). For the moment, though, let's not think too hard about that part. My question is: to what extent should the software be responsible for trying to explain the meaning of a fatal error? In general, how much competence/knowledge are you allowed to presume on the part of the software's administrators, and how much troubleshooting information and how many potential resolution steps should you include when logging fatal errors? Of course, anything unique to the runtime context should definitely be logged; but let's assume your software needs to talk to Active Directory via LDAP and gets back the error "[LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece]". Is it reasonable to assume that the maintainers will be able to Google the error code and work out what it means, or should the software try to parse the error code and log that this is caused by an incorrect user DN in the LDAP config? I don't know if there is a definitive best-practices answer for this, so I'm keen to hear a variety of views.
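    For concreteness, a small Python sketch of the middle ground: log the raw error verbatim, and append a decoded hint when one is known. The table covers the widely documented AD sub-codes that appear after "data" in LDAP error 49 (treat the exact wording of the hints as an assumption, not gospel):

        import re

        AD_DATA_CODES = {  # well-known AcceptSecurityContext sub-codes
            '525': 'user not found (check the bind DN)',
            '52e': 'invalid credentials',
            '530': 'not permitted to log on at this time',
            '532': 'password expired',
            '533': 'account disabled',
            '701': 'account expired',
            '773': 'user must reset password',
            '775': 'account locked out',
        }

        def explain_ldap_error(message):
            m = re.search(r'data ([0-9a-fA-F]+)', message)
            hint = AD_DATA_CODES.get(m.group(1).lower()) if m else None
            return message + (' | likely cause: ' + hint if hint else '')

    The original message is always preserved, so an admin can still Google the code; the hint just saves the first search in the common cases.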


  • Low Level Console Input

    - by Soulseekah
    I'm trying to send commands to the input of a cmd.exe application using the low-level console functions. I have no trouble reading the text (scraping) using the ReadConsole...() and WriteConsole() functions after having attached to the process console, but I've not figured out how to write, for example, "dir" and have the console interpret it as a command. Here's a bit of my code:

        CreateProcess(NULL, "cmd.exe", NULL, NULL, FALSE, CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi);
        AttachConsole(pi.dwProcessId);
        strcpy(buffer, "dir");
        WriteConsole(GetStdHandle(STD_INPUT_HANDLE), buffer, strlen(buffer), &charRead, NULL);

    The STARTUPINFO attributes of the process are all set to zero except, of course, the .cb attribute. Nothing changes on the screen; however, I'm getting an Error 6 (Invalid Handle) back from the WriteConsole call on STD_INPUT_HANDLE. If I write to STD_OUTPUT_HANDLE, "dir" does get written on the screen, but of course nothing happens. I'm guessing SetConsoleMode() might be of help, but I've tried many mode combinations and nothing helped. I've also created a quick console application that waits for input (scanf()) and echoes back whatever goes in; that didn't work either. I've also tried typing into the scanf() prompt and then peeking into the input buffer using PeekConsoleInput(); it returns 0, but the INPUT_RECORD array is empty. I'm aware that there is another way around this, using WriteConsoleInput() to inject INPUT_RECORD structured events directly into the console, but this would take way too long, as I'd have to send each keypress into it. I hope the question is clear. Please let me know if you need any further information. Thanks for your help.


  • Application Design: Single vs. Multiple Hits to the DB

    - by shyneman
    I'm building a service that performs a set of configured activities based on the type of request that it receives. Each activity involves going to the database and retrieving or updating some kind of information. The logic for each activity can be generalized and reused across different request types. The activities may need to participate in a transaction for the duration of servicing the request. One option I'm considering is having each activity maintain its own access to the DAL/database. This fully encapsulates the activity into a stand-alone, reusable piece, but hitting the database multiple times for one request doesn't seem like a viable option, and I don't really know of an easy way to implement a transaction that spans the multiple activities here either. The second option is to merge ALL the activities into one big activity and hit the database once. But this does not allow reuse and configuration of these activities for different requests. Does anyone have any suggestions or input about the best way to approach my problem? Thanks for any help.
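    One common middle ground, sketched here in Python with hypothetical names: keep each activity stand-alone, but have the request handler open one connection/transaction and pass it to every activity, so the activities stay reusable while sharing a single unit of work.

        import sqlite3
        from contextlib import contextmanager

        class UpdateBalance:
            """One reusable activity; it never opens its own connection."""
            def run(self, conn, request):
                # 'accounts' is a hypothetical table for illustration
                conn.execute('UPDATE accounts SET balance = balance + ? WHERE id = ?',
                             (request['amount'], request['account_id']))

        @contextmanager
        def unit_of_work(db_path):
            conn = sqlite3.connect(db_path)
            try:
                yield conn
                conn.commit()      # all activities commit together
            except Exception:
                conn.rollback()    # ...or roll back together
                raise
            finally:
                conn.close()

        def handle(request, activities, db_path='service.db'):
            with unit_of_work(db_path) as conn:
                for activity in activities:
                    activity.run(conn, request)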


  • Rails runner command not saving to cache

    - by mark
    Hi, I'm having a bit of a problem with a cron task, generated by Rails' whenever plugin, that should store remote data in the Rails cache for display. What I have in schedule.rb:

        set :path, '/var/www/apps/tuexplore/current'

        every 1.hour do
          runner "Weather.cache_remote", :environment => :production
        end

    This calls the following model:

        class Weather
          def self.cache_remote
            Rails.cache.write('weather_data',
              Net::HTTP.get_response(URI.parse(WEATHER_URL)).body)
          end
        end

    Running whenever generates this crontab entry:

        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/deploy/.gem/ruby/1.8/bin
        0 * * * * /var/www/apps/tuexplore/current/script/runner -e production "Weather.cache_remote"

    This doesn't work. Calling the Weather model method from a controller works fine, but I need to schedule it hourly. The cron task causes a "Cache write: weather_data" entry to appear in the production log, but the data isn't stored or output into the page. Additional information: I can log into the production console, run Weather.cache_remote, and then read the data from the Rails cache. I'd really appreciate it if someone could point out the error of my ways. If further explanation is needed, please ask. Thanks in advance for any pointers.


  • How should I organize complex SQL views in Rails?

    - by Benjamin Oakes
    I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists, who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I currently present it as CSV, as that is very straightforward to do and compatible with the tools people want to use. I've written many views (the SQL kind, not the Rails HTML/ERB kind) to produce the output they expect. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because many of the calculations and comparisons involved are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql, and to get the requested data I do a select * from my_view;. The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so the views change all the time, and a ton of new ones are being created. Many of them need to be repeatable. I've recently run into issues organizing and testing these views. Rails works great for user-interface code and business logic, but I'm not aware of much existing structure for handling the reporting we require. Some options I've thought of:

    - Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model.
    - Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.)

    How would you deal with a complex reporting problem like this?


  • Creating a complex tree model in Qt

    - by Zeke
    I'm writing an IRC client (yes, another one). Long story short, I'm writing a server dialog that keeps lists of the following:

    - Identity
    - Networks
    - Channels
    - Addresses

    I have three different list views, for the networks, channels, and addresses. When the user changes the identity (a combo box), the network list view looks up all the networks for that specific identity. After it loads the networks, it automatically selects the first one and then loads all the channels and addresses for that specific network. I want to have three views backed by one model, to minimize memory use and data loading, and to make everything easier to manage without a bunch of duplicated work. If you look at QColumnView, it's the same exact thing, except I don't need it all on one page, since the views are on entirely different tabs to make it easier to step through the server dialog. I'm wondering what the best way would be to handle this complexity. The information is stored in an SQLite database, and I already have the classes written to extract and store it. The modelling is the painful part of this solution.


  • MySQL managing catalogue views

    - by Mark Lawrence
    A friend of mine has a catalogue that currently holds about 500 rows, or 500 items. We are looking at ways to provide reports on the catalogue, including the number of times an item was viewed and the dates when it was viewed. His site is averaging around 25,000 page impressions per month, and if we assume for a minute that half of these are catalogue items, then roughly 12,000 catalogue items are viewed each month. My question is about the best way to manage item views in the database. The first option is to insert the catalogue ID into a table and then increment the number of times it's viewed:

        `catalogue_id`, `views`

    The advantage of this is its compact nature: there will only ever be as many rows in the table as there are catalogue items. The disadvantage is that no date information is held, short of maintaining the last time an item was viewed. The second option is to insert a new row each time an item is viewed:

        `catalogue_id`, `timestamp`

    If we continue with the assumed figure of 12,000 item views, that means adding 12,000 rows to the table each month, or 144,000 rows each year. The advantage of this is that we know the number of times an item is viewed and also the dates it was viewed on. The disadvantage is the size of the table. Is a table with 144,000 rows becoming too large for MySQL? I'm interested to hear any thoughts or suggestions on how to achieve this. Thanks.


  • How important is it to use SSL on every page of your website?

    - by Mark
    Recently I installed a certificate on the website I'm working on. I've made as much of the site as possible work over HTTP, but after you log in, it has to remain on HTTPS to prevent session hijacking, doesn't it? Unfortunately, this causes some problems with Google Maps; I get warnings in IE saying "this page contains insecure content", and I don't think we can afford Google Maps Premier right now to get their secure service. It's sort of an auction site, so it's fairly important that people don't get charged for things they didn't purchase because some hacker got into their account. All payments are done through PayPal, though, so I'm not storing any credit card info, but I am keeping personal contact information. Fraudulent charges could be reversed fairly easily if it ever came to that. What do you suggest I do? Should I take the bulk of the site off HTTPS and just secure certain pages, like wherever you enter your password, and that's it? That's what our competition seems to do.


  • ISBNs are used as primary keys; now I want to add non-book things to the DB - should I migrate to EAN?

    - by fish2000
    I built an inventory database where ISBN numbers are the primary keys for the items. This worked great for a while, as the items were books. Now I want to add non-books. Some of the non-books have EANs or ISSNs; some do not. It's in PostgreSQL with Django apps for the frontend and JSON API, plus a few supporting Python command-line tools for management. The items in question are mostly books and artist prints, some of which are self-published. What is nice about using ISBNs as primary keys is that on top of relational integrity, you get lots of handy utilities for validating ISBNs, automatically looking up missing or additional information on the book items, and so on, many of which I've taken advantage of. Some such tools are off-the-shelf (PyISBN, PyAWS, etc.) and some are hand-rolled. I tried to keep all of these parts nice and decoupled, but you know how things can get. I couldn't find anything online about 'private ISBNs' or 'self-assigned ISBNs', but that's the sort of thing I was interested in doing. I doubt that's what I'll settle on, since there is already an apparent run on ISBN numbers. Should I retool everything for EAN numbers, or migrate off ISBNs as primary keys in general? If anyone has experience working with these systems, I'd love to hear about it; your advice is most welcome.


  • Getting an Internal Server Error

    - by Rishi2686
    Hi there. When I try to run my site, it gives an Internal Server Error; when I refresh the page, I get my result properly. The error page looks like this:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete
        your request. Please contact the server administrator, [email protected] and inform them
        of the time the error occurred, and anything you might have done that may have caused
        the error. More information about this error may be available in the server error log.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument
        to handle the request.

    I also checked the error_log file on my server; it gives errors like:

        [Sat Jun 12 01:21:55 2010] [error] [client 117.195.6.76] File does not exist:
        /home/rohit25/public_html/test/500.shtml, referer: http://www.test.mysite.com/home.php

    and sometimes the error can be:

        [Sat May 29 19:35:12 2010] [error] [client 97.85.189.208] File does not exist:
        /home2/carlton/public_html/test/favicon.ico

    Are any changes required in the configuration file? I also tried adding this error code to a custom error page; it shows the error page but does not resolve the issue. Your urgent help will be greatly appreciated.


  • Are object/class methods serialized as well?

    - by Mat90
    I know that data members are saved to disk, but I was wondering whether an object's/class's methods are serialized as well. I ask because I have found some contradictory information. For example, Ivor Horton writes: "Class objects contain function members as well as data members, and all the members, both data and functions, have access specifiers; therefore, to record objects in an external file, the information written to the file must contain complete specifications of all the class structures involved." And there is also this: Are methods also serialized along with the data members in .NET? Thus: are a method's assembly instructions (opcodes and operands) stored to disk as well, just like in a precompiled LIB or DLL? During the DOS ages I used assembly now and then. As far as I remember from Delphi, and according to the answer by dan04 at the link above, sizeof(<OBJECT or CLASS>) gives the size of all data members together, with no methods/procedures included. A nice C example is also given there, with data and function members declared in one class/struct, while at runtime the methods are separate procedures acting on a struct of data. However, I think that later class/object implementations, like Pascal's VMT, may be laid out differently in memory.
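    A quick way to see the usual behaviour, sketched in Python: serializers typically store the instance's data plus a reference to the class, and the method code never enters the byte stream; it is looked up from the class again at load time.

        import pickle

        class Point:
            def __init__(self, x, y):
                self.x, self.y = x, y
            def norm(self):
                return (self.x ** 2 + self.y ** 2) ** 0.5

        blob = pickle.dumps(Point(3, 4))
        print(b'norm' in blob)             # False: no method code is serialized
        print(pickle.loads(blob).norm())   # 5.0: the method comes from the class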

    Read the article
