Search Results

Search found 8687 results on 348 pages for 'per akerberg'.

  • How does delete deal with pointer constness?

    - by aJ
    I was reading this question, Deleting a const pointer, and wanted to know more about delete behavior. Now, as per my understanding, the delete expression works in two steps: it invokes the destructor and then releases the memory (often with a call to free()) by calling operator delete. operator delete accepts a void*. As part of a test program I overloaded operator delete and found that operator delete doesn't accept a const pointer. Since operator delete does not accept a const pointer and delete internally calls operator delete, how does deleting a const pointer work? Does delete use const_cast internally?
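
    A minimal sketch of the scenario being described (a class-level operator delete overload plus a delete through a pointer to const); it compiles and runs because destroying a const object is allowed, and the delete-expression itself hands the storage to operator delete(void*) without any cast in user code:

        #include <cstdio>

        struct Widget {
            ~Widget() { std::puts("~Widget"); }

            // Class-specific deallocation function: note the parameter is a
            // plain void*, not const void*.
            static void operator delete(void* p) {
                std::puts("Widget::operator delete(void*)");
                ::operator delete(p);   // forward to the global deallocator
            }
        };

        int main() {
            const Widget* w = new Widget;   // pointer to const
            delete w;                       // destructor runs, then the class's
                                            // operator delete(void*) is called;
                                            // the constness is dropped by the
                                            // delete-expression, not by user code
        }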

  • SQL Queries for Creating a rollback point and to rollback to that specific point

    - by Santhosha
    Hi, as per my project requirements I want to perform two operations: (1) password change and (2) unlock account (only unlocking the account, no password change!). I want to return success only if both operations succeed; if the password change succeeds and the unlock fails, I cannot send success or failure. So I want to create a rollback point before the password change: if both queries execute successfully I will commit the transaction, and if one of the queries fails I will discard the changes by rolling back to the rollback point. I am doing this in C++ using ADO. Are there SQL queries with which I can create the rollback point, revert to the rollback point, and commit the transaction? I am using the commands below.

        For password change:
        ALTER LOGIN [username] WITH PASSWORD = N'password'

        For unlock account:
        ALTER LOGIN [%s] WITH CHECK_POLICY = OFF
        ALTER LOGIN [%s] WITH CHECK_POLICY = ON

    Thanks in advance!! Santhosh
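
    In T-SQL the rollback point itself is a savepoint: SAVE TRANSACTION <name> creates it, and ROLLBACK TRANSACTION <name> reverts to it without ending the outer transaction. Below is a sketch of how that pattern could be driven from C++; exec() is a hypothetical stand-in for whatever ADO Execute call the existing code already uses, and whether ALTER LOGIN statements can actually be rolled back inside a transaction should be verified against SQL Server's documentation:

        #include <cstdio>
        #include <stdexcept>
        #include <string>

        // Hypothetical stand-in for the existing ADO call (e.g. _ConnectionPtr::Execute);
        // here it just echoes the statement so the sketch is self-contained.
        void exec(const std::string& sql) { std::puts(sql.c_str()); }

        void ChangePasswordAndUnlock(const std::string& login, const std::string& password) {
            exec("BEGIN TRANSACTION;");
            exec("SAVE TRANSACTION BeforeAccountChange;");         // the rollback point
            try {
                // Note: build these with proper parameterization/escaping in real code.
                exec("ALTER LOGIN [" + login + "] WITH PASSWORD = N'" + password + "';");
                exec("ALTER LOGIN [" + login + "] WITH CHECK_POLICY = OFF;");
                exec("ALTER LOGIN [" + login + "] WITH CHECK_POLICY = ON;");
                exec("COMMIT TRANSACTION;");                       // both operations succeeded
            } catch (const std::exception&) {
                // Failures from the real Execute call would be caught here.
                exec("ROLLBACK TRANSACTION BeforeAccountChange;"); // undo to the savepoint
                exec("COMMIT TRANSACTION;");                       // close the outer transaction
                throw;                                             // report failure to the caller
            }
        }

        int main() { ChangePasswordAndUnlock("someuser", "newpassword"); }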

  • Code reviews for larger MVC.NET team using TFS

    - by Parrots
    I'm trying to find a good code review workflow for my team. Most questions similar to this on SO revolve around using shelved changes for the review; however, I'm curious how this works for larger teams. We usually have 2-3 people working a story (a UI person, a Domain/Repository person, sometimes a DB person). I've recommended the shelf idea, but we're all concerned about how to manage that with multiple people working the same feature. How could you share a shelf between multiple programmers at that point? We worry it would be clunky and that we might easily have unintended consequences moving to this workflow. Of course, moving to shelves for each feature avoids having 10 or so check-ins per feature (as developers need to share code), which make seeing the diffs at code review time painful. Has anyone else been able to deal with this successfully? Are there any tools out there people have found useful aside from shelves in TFS (preferably open-source)?

  • How to export more than 1MB in XML format using sqlcmd and without an input file?

    - by jon
    Hello, in SQL Server 2008 I want to export the result of a stored procedure to a file using the sqlcmd utility. The end of my stored procedure is a SELECT statement with a "for xml path.." clause at the end. I read on BOL that if I don't want my output truncated when it reaches the 1 MB file size limit, I have to use the :XML ON command, but it has to be placed on its own line, before the call to the stored procedure. Do any of you experts know if it is possible to do that without specifying an input file for sqlcmd? (I'm calling sqlcmd like this: exec master..xp_cmdshell 'sqlcmd -Q"exec storedProcedureName @param1=value1, @param2=value2" -o c:\exportResults.xml -h-1 -E', but "storedProcedureName" and its parameters can change, which would mean one input file per parameter combination passed to sqlcmd.) Also, it seems that I can't use bcp instead of sqlcmd because my stored procedure creates a temporary table and performs DML statements on it? Thanks a lot

  • Test Driven Development For Complex Methods involving external dependency

    - by bill_tx
    I am implementing a service contract for a WCF service. As per TDD, I first wrote a test case and made it pass using hardcoded values. After that I started to put the real logic into my service implementation. The actual logic relies on 3-4 external services and a database. What should I do with the original test case that I wrote? If I keep it the same, then in order to make the test pass it will have to call several other external services. So I have a question in general: what should I do if I write a test case for a business facade first using TDD, and later, when I add the real logic, it involves external dependencies?
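
    One common way to keep the original test passing is to put each external service and the database behind an interface that the test can replace with a fake, so only the integration tests touch the real dependencies. A sketch of that idea, in C++ rather than the C#/WCF code in question, with invented names:

        #include <string>

        // The external dependency, hidden behind an interface.
        struct IRateService {
            virtual ~IRateService() = default;
            virtual double GetRate(const std::string& code) = 0;
        };

        // The business facade receives the dependency instead of creating it.
        class QuoteFacade {
        public:
            explicit QuoteFacade(IRateService& rates) : rates_(rates) {}
            double Quote(const std::string& code, double amount) {
                return amount * rates_.GetRate(code);
            }
        private:
            IRateService& rates_;
        };

        // The unit test wires in a hand-rolled fake, so no network or database
        // is needed and the original hardcoded expectation still holds.
        struct FakeRateService : IRateService {
            double GetRate(const std::string&) override { return 1.25; }
        };

        int main() {
            FakeRateService fake;
            QuoteFacade facade(fake);
            return facade.Quote("USD", 100.0) == 125.0 ? 0 : 1;   // 0 = test passed
        }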

  • Pro JavaScript programmer interview questions

    - by WooYek
    What are good questions to determine whether an applicant is really a pro JavaScript developer? Questions that can distinguish someone who is not just an ad-hoc JavaScript programmer but is really doing professional JavaScript development: object-oriented, reusable, and maintainable. Please provide answers, so that intermediate and ad-hoc JavaScript programmers can interview someone more experienced; coming up with answers to quite a few of those advanced questions would elude me. Please avoid open questions. Please keep one interview question/answer per SO answer for a better reading experience and easier interview preparation. This is a possible duplicate, but those posts contain (mostly) only questions and no answers: Advanced JavaScript Interview Questions; What questions should every good JavaScript developer be able to answer?

  • unarchiveObjectWithFile retain / autorelease needed?

    - by fuzzygoat
    Just a quick memory management question if I may ... Is the code below ok, or should I be doing a retain and autorelease, I get the feeling I should. But as per the rules unarchiveObjectWithFile does not contain new, copy or alloc.

        -(NSMutableArray *)loadGame {
            if ([[NSFileManager defaultManager] fileExistsAtPath:[self pathForFile:@"gameData.plist"]]) {
                NSMutableArray *loadedGame = [NSKeyedUnarchiver unarchiveObjectWithFile:[self pathForFile:@"gameData.plist"]];
                return loadedGame;
            } else
                return nil;
        }

    or

        -(NSMutableArray *)loadGame {
            if ([[NSFileManager defaultManager] fileExistsAtPath:[self pathForFile:@"gameData.plist"]]) {
                NSMutableArray *loadedGame = [[NSKeyedUnarchiver unarchiveObjectWithFile:[self pathForFile:@"gameData.plist"]] retain];
                return [loadedGame autorelease];
            } else
                return nil;
        }

    gary

  • Mapping Absolute / Relative (Local) Paths to Absolute URLs

    - by Alix Axel
    I need a fast and reliable way to map an absolute or relative local path (say ./images/Mafalda.jpg) to its corresponding absolute URL. So far I've managed to come up with this:

        function Path($path)
        {
            if (file_exists($path) === true)
            {
                return rtrim(str_replace('\\', '/', realpath($path)), '/') . (is_dir($path) ? '/' : '');
            }
            return false;
        }

        function URL($path)
        {
            $path = Path($path);
            if ($path !== false)
            {
                return str_replace($_SERVER['DOCUMENT_ROOT'], getservbyport($_SERVER['SERVER_PORT'], 'tcp') . '://' . $_SERVER['HTTP_HOST'], $path);
            }
            return false;
        }

        URL('./images/Mafalda.jpg'); // http://domain.com/images/Mafalda.jpg

    It seems to be working as expected, but since this is a critical feature of my app I want to ask if anyone can spot any problem that I might have missed. Optimizations are also welcome, since I'm going to use this function several times per request.

  • Hosting solution for images for website written in PHP

    - by tomaszs
    I've written a website in PHP and it will have the ability for users to upload images. My website will have more than 100,000 users. Approximately 1,000 users will each upload an image of about 50 KB, and every image will be displayed on this website 5,000 times, so the transfer is: 1k x 50 KB x 5k = 250 GB per month. So my question is: do you know any good solution (hosting, CDN network or otherwise) that:

    - is paid for transfer, not space used, with no entrance fee
    - has an API to upload images easily from PHP
    - is extremely easy to use
    - is good for a low budget
    - does not require any special, complicated registration and formalities
    - allows commercial use
    - allows using these images in the website layout?

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions the two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented on Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There have never been two independent service curves for either one as you currently find in BSD and Linux.

    Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist).

    This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below (not reproduced in this excerpt; from the questions that follow, it is a root class with 512 kbit/s, two inner classes A and B with 256 kbit/s each, and four leaf classes A1, A2, B1, B2 with 128 kbit/s each). Here are some questions I cannot answer because the tutorials contradict each other:

    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s] both have twice the priority of A2 and B2 automatically (while still only getting 128 kbit/s on average).

    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?

    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth plus link-share bandwidth. What is the truth?

    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference whether A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not the average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

    8. One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get; link-share is the bandwidth that this class wants in order to become fully satisfied, but satisfaction cannot be guaranteed (in case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says); and for all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above, the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?

    9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understood HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
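
    For reference on what a "curve" is in these parameters: in both implementations a service curve is the two-piece linear shape that the [256kbit/s 20ms 128kbit/s] notation above describes, i.e. slope m1 for the first d of backlog, then slope m2. A tiny helper that evaluates such a curve, as a sketch with units simplified to bits and seconds:

        #include <cstdio>

        // Amount of service (bits) promised by a two-piece linear HFSC service
        // curve after a class has been backlogged for t seconds: slope m1
        // (bit/s) for the first d seconds, slope m2 (bit/s) afterwards.
        double service_curve(double m1, double d, double m2, double t) {
            return (t <= d) ? m1 * t : m1 * d + m2 * (t - d);
        }

        int main() {
            // The [256kbit/s 20ms 128kbit/s] example from the question: a steeper
            // initial slope buys lower delay while the long-term rate stays 128 kbit/s.
            std::printf("after 10 ms: %.0f bits\n", service_curve(256000, 0.020, 128000, 0.010));
            std::printf("after 1 s:   %.0f bits\n", service_curve(256000, 0.020, 128000, 1.0));
        }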

  • How to determine if a file will be logically moved or physically moved.

    - by Frederic Morin
    The facts: when a file is moved, there are two possibilities: (1) the source and destination are on the same partition and only the file system index is updated, or (2) the source and destination are on two different file systems and the file needs to be moved byte by byte (a copy on move). The question: how can I determine whether a file will be logically or physically moved? I'm transferring large files (700+ MB) and would adopt a different behavior for each situation. Edit: I've already coded a file-moving dialog with a worker thread that performs the blocking IO calls to copy the file a megabyte at a time. It provides information to the user such as a rough estimate of the remaining time and the transfer rate. The problem is: how do I know whether the file can be moved logically before trying to move it physically?
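
    One way to answer "logical or physical?" before starting the transfer is to check whether the source file and the destination directory live on the same filesystem. A sketch for POSIX systems (on Windows, comparing the results of GetVolumePathName() for both paths is the analogous check):

        #include <sys/stat.h>
        #include <cstdio>

        // True if both paths are on the same filesystem, i.e. a rename() would be
        // a cheap metadata update rather than a byte-by-byte copy.
        bool SameFilesystem(const char* sourcePath, const char* destDir) {
            struct stat src{}, dst{};
            if (stat(sourcePath, &src) != 0 || stat(destDir, &dst) != 0)
                return false;                    // on error, assume the slow path
            return src.st_dev == dst.st_dev;     // same device ID => same filesystem
        }

        int main(int argc, char** argv) {
            if (argc != 3) return 2;             // usage: prog <source-file> <dest-dir>
            std::puts(SameFilesystem(argv[1], argv[2])
                          ? "logical move (same filesystem)"
                          : "physical copy (different filesystems)");
        }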

  • iTunes Visualizer Plugin in C# - Energy Function

    - by James D
    Hi, iTunes Visualizer plugin in C#. Easiest way to compute the "energy level" for a particular sample? I looked at this writeup on beat detection over at GameDev and have had some success with it (I'm not doing beat detection per se, but it's relevant). But ultimately I'm looking for a stupid-as-possible, quick-and-dirty approach for demo purposes. For those who aren't familiar with how iTunes structures visualization data, basically you're given this:

        struct VisualPluginData {
            /* SNIP */
            RenderVisualData renderData;
            UInt32 renderTimeStampID;
            UInt8 minLevel[kVisualMaxDataChannels]; // 0-128
            UInt8 maxLevel[kVisualMaxDataChannels]; // 0-128
        };

        struct RenderVisualData {
            UInt8 numWaveformChannels;
            UInt8 waveformData[kVisualMaxDataChannels][kVisualNumWaveformEntries]; // 512-point FFT
            UInt8 numSpectrumChannels;
            UInt8 spectrumData[kVisualMaxDataChannels][kVisualNumSpectrumEntries];
        };

    Ideally, I'd like an approach that a beginning programmer with little to no DSP experience could grasp and improve on. Any ideas? Thanks!
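
    A quick-and-dirty "energy" number that needs no DSP background is the mean squared deviation of one render pass's waveform bytes from the centre value. Sketched in C++ against the C structs above (the same arithmetic ports directly to C#); it assumes the waveform samples are unsigned bytes centred around 128, and the pointer/length stand in for one channel of waveformData:

        #include <cstddef>
        #include <cstdint>

        // Mean squared deviation from the centre value 128, normalised to 0..1.
        // 0 means silence; values approaching 1 mean the waveform swings hard.
        double FrameEnergy(const std::uint8_t* waveform, std::size_t numSamples) {
            double sum = 0.0;
            for (std::size_t i = 0; i < numSamples; ++i) {
                const double centred = (static_cast<double>(waveform[i]) - 128.0) / 128.0;
                sum += centred * centred;    // squaring counts positive and negative
                                             // excursions alike
            }
            return numSamples ? sum / static_cast<double>(numSamples) : 0.0;
        }

        int main() {
            std::uint8_t quiet[4] = {128, 128, 128, 128};
            std::uint8_t loud[4]  = {0, 255, 0, 255};
            return FrameEnergy(loud, 4) > FrameEnergy(quiet, 4) ? 0 : 1;   // sanity check
        }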

  • What makes you come back to stackoverflow every day? [closed]

    - by rmarimon
    I know this is not a programming question. Let's try to label it a programming community question so that it doesn't get closed. I've been wondering what makes the programming community so prone to helping others on stackoverflow. Is this something particular to programmers? Do you think lawyers and accountants would help other lawyers and accountants the way we do? What makes you come back to stackoverflow every day? It would be great to have an answer per reason so that we can get a list of the reasons. In my case, I come to stackoverflow to ask questions that I can't solve quickly, and to test how good I am at answering questions. So far I've failed miserably at trying to answer questions, but it has helped me understand how little I know.

  • SQL Server 2000 tables

    - by klork
    We currently have an SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into a one-table-per-user model. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647, considering just positive values. My questions: Does anyone have any experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture with regard to maintenance and access using Query Analyzer, Enterprise Manager, etc.? What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks

  • What non-programming books should programmers read?

    - by Charles Roper
    This is a poll asking the Stackoverflow community what non-programming books they would recommend to fellow programmers. Please read the following before posting: Please post only ONE BOOK PER ANSWER. Please search for your recommendation on this page before posting (there are over NINE PAGES so it is advisable to check them all). Many books have already been suggested and we want to avoid duplicates. If you find your recommendation is already present, vote it up or add some commentary. Please elaborate on why you think a given book is worth reading from a programmer's perspective. This poll is now community editable, so you can edit this question or any of the answers. Note: this article is similar and contains other useful suggestions.

  • Sendmail to local domain ignoring MX records (part 2)

    - by FractalizeR
    Hello. I have exactly the same problem as in this post: http://serverfault.com/questions/25068/sendmail-to-local-domain-ignoring-mx-records I am also using an email provider like GMail For Your Domain (which stores and manages your mail). I send mail from my server directly, but receiving mail is done via Yandex (the email provider). Since the server hosts a forum, I prefer to send mail directly from it, because using another mail provider can slow things down. Also, when I send 300,000 emails to my subscribers, the email provider will surely block me, thinking I send spam. My DNS zone now is:

        ;
        ; GSMFORUM.RU
        ;
        $TTL 1H
        gsmforum.ru.      SOA   ns1.hc.ru. support.hc.ru. (
                                2009122268  ; Serial
                                1H          ; Refresh
                                30M         ; Retry
                                1W          ; Expire
                                1H )        ; Minimum
        gsmforum.ru.      NS    ns1.hc.ru.
        gsmforum.ru.      NS    ns2.hc.ru.
        @                 A     79.174.68.223
        *.gsmforum.ru.    CNAME @
        ns1               A     79.174.68.223
        ns2               A     79.174.68.224
        @                 MX    10 mx.yandex.ru.
        mail              CNAME domain.mail.yandex.net.
        yamail-xxxxxxxxx  CNAME mail.yandex.ru.

    The server hostname is server.gsmforum.ru. Maybe this is the cause? Can someone explain the reason for this behaviour (the rules that make sendmail consider a domain to be local)? Can I simply change *.gsmforum.ru. CNAME @ into *.gsmforum.ru. A 79.174.68.224 to solve this problem?

        [root@server ~]# cat /etc/mail/local-host-names
        localhost
        localhost.localdomain

    This server hosts gsmforum.ru, so I cannot put it into another domain like David Mackintosh suggests. Putting the domain in mailertable doesn't solve the problem either; sendmail -bt still shows that the address is local. DontProbeInterfaces is also set to true in the sendmail config. The M4 file follows:

        divert(-1)dnl
        dnl #
        dnl # This is the sendmail macro config file for m4. If you make changes to
        dnl # /etc/mail/sendmail.mc, you will need to regenerate the
        dnl # /etc/mail/sendmail.cf file by confirming that the sendmail-cf package is
        dnl # installed and then performing a
        dnl #
        dnl #     make -C /etc/mail
        dnl #
        include(`/usr/share/sendmail-cf/m4/cf.m4')dnl
        VERSIONID(`setup for linux')dnl
        OSTYPE(`linux')dnl
        dnl #
        dnl # Do not advertize sendmail version.
        dnl #
        dnl define(`confSMTP_LOGIN_MSG', `$j Sendmail; $b')dnl
        dnl #
        dnl # default logging level is 9, you might want to set it higher to
        dnl # debug the configuration
        dnl #
        dnl define(`confLOG_LEVEL', `9')dnl
        dnl #
        dnl # Uncomment and edit the following line if your outgoing mail needs to
        dnl # be sent out through an external mail server:
        dnl #
        dnl define(`SMART_HOST', `smtp.your.provider')dnl
        dnl #
        define(`confDEF_USER_ID', ``8:12'')dnl
        dnl define(`confAUTO_REBUILD')dnl
        define(`confTO_CONNECT', `1m')dnl
        define(`confTRY_NULL_MX_LIST', `True')dnl
        define(`confDONT_PROBE_INTERFACES',`True')
        define(`PROCMAIL_MAILER_PATH', `/usr/bin/procmail')dnl
        define(`ALIAS_FILE', `/etc/aliases')dnl
        define(`STATUS_FILE', `/var/log/mail/statistics')dnl
        define(`UUCP_MAILER_MAX', `2000000')dnl
        define(`confUSERDB_SPEC', `/etc/mail/userdb.db')dnl
        define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl
        define(`confAUTH_OPTIONS', `A')dnl
        dnl #
        dnl # The following allows relaying if the user authenticates, and disallows
        dnl # plaintext authentication (PLAIN/LOGIN) on non-TLS links
        dnl #
        dnl define(`confAUTH_OPTIONS', `A p')dnl
        dnl #
        dnl # PLAIN is the preferred plaintext authentication method and used by
        dnl # Mozilla Mail and Evolution, though Outlook Express and other MUAs do
        dnl # use LOGIN. Other mechanisms should be used if the connection is not
        dnl # guaranteed secure.
        dnl # Please remember that saslauthd needs to be running for AUTH.
        dnl #
        dnl TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
        dnl define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
        dnl #
        dnl # Rudimentary information on creating certificates for sendmail TLS:
        dnl #     cd /usr/share/ssl/certs; make sendmail.pem
        dnl # Complete usage:
        dnl #     make -C /usr/share/ssl/certs usage
        dnl #
        dnl define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl
        dnl define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl
        dnl define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl
        dnl define(`confSERVER_KEY', `/etc/pki/tls/certs/sendmail.pem')dnl
        dnl #
        dnl # This allows sendmail to use a keyfile that is shared with OpenLDAP's
        dnl # slapd, which requires the file to be readble by group ldap
        dnl #
        dnl define(`confDONT_BLAME_SENDMAIL', `groupreadablekeyfile')dnl
        dnl #
        dnl define(`confTO_QUEUEWARN', `4h')dnl
        dnl define(`confTO_QUEUERETURN', `5d')dnl
        dnl define(`confQUEUE_LA', `12')dnl
        dnl define(`confREFUSE_LA', `18')dnl
        define(`confTO_IDENT', `0')dnl
        dnl FEATURE(delay_checks)dnl
        FEATURE(`no_default_msa', `dnl')dnl
        FEATURE(`smrsh', `/usr/sbin/smrsh')dnl
        FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl
        FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
        FEATURE(redirect)dnl
        FEATURE(always_add_domain)dnl
        FEATURE(use_cw_file)dnl
        FEATURE(use_ct_file)dnl
        dnl #
        dnl # The following limits the number of processes sendmail can fork to accept
        dnl # incoming messages or process its message queues to 20.) sendmail refuses
        dnl # to accept connections once it has reached its quota of child processes.
        dnl #
        dnl define(`confMAX_DAEMON_CHILDREN', `20')dnl
        dnl #
        dnl # Limits the number of new connections per second. This caps the overhead
        dnl # incurred due to forking new sendmail processes. May be useful against
        dnl # DoS attacks or barrages of spam. (As mentioned below, a per-IP address
        dnl # limit would be useful but is not available as an option at this writing.)
        dnl #
        dnl define(`confCONNECTION_RATE_THROTTLE', `3')dnl
        dnl #
        dnl # The -t option will retry delivery if e.g. the user runs over his quota.
        dnl #
        FEATURE(local_procmail, `', `procmail -t -Y -a $h -d $u')dnl
        FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl
        FEATURE(`blacklist_recipients')dnl
        EXPOSED_USER(`root')dnl
        dnl #
        dnl # For using Cyrus-IMAPd as POP3/IMAP server through LMTP delivery uncomment
        dnl # the following 2 definitions and activate below in the MAILER section the
        dnl # cyrusv2 mailer.
        dnl #
        dnl define(`confLOCAL_MAILER', `cyrusv2')dnl
        dnl define(`CYRUSV2_MAILER_ARGS', `FILE /var/lib/imap/socket/lmtp')dnl
        dnl #
        dnl # The following causes sendmail to only listen on the IPv4 loopback address
        dnl # 127.0.0.1 and not on any other network devices. Remove the loopback
        dnl # address restriction to accept email from the internet or intranet.
        dnl #
        DAEMON_OPTIONS(`Name=MTA,Port=smtp')
        dnl #
        dnl # The following causes sendmail to additionally listen to port 587 for
        dnl # mail from MUAs that authenticate. Roaming users who can't reach their
        dnl # preferred sendmail daemon due to port 25 being blocked or redirected find
        dnl # this useful.
        dnl #
        dnl DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl
        dnl #
        dnl # The following causes sendmail to additionally listen to port 465, but
        dnl # starting immediately in TLS mode upon connecting. Port 25 or 587 followed
        dnl # by STARTTLS is preferred, but roaming clients using Outlook Express can't
        dnl # do STARTTLS on ports other than 25. Mozilla Mail can ONLY use STARTTLS
        dnl # and doesn't support the deprecated smtps; Evolution <1.1.1 uses smtps
        dnl # when SSL is enabled-- STARTTLS support is available in version 1.1.1.
        dnl #
        dnl # For this to work your OpenSSL certificates must be configured.
        dnl #
        dnl DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl
        dnl #
        dnl # The following causes sendmail to additionally listen on the IPv6 loopback
        dnl # device. Remove the loopback address restriction listen to the network.
        dnl #
        dnl DAEMON_OPTIONS(`port=smtp,Addr=::1, Name=MTA-v6, Family=inet6')dnl
        dnl #
        dnl # enable both ipv6 and ipv4 in sendmail:
        dnl #
        dnl DAEMON_OPTIONS(`Name=MTA-v4, Family=inet, Name=MTA-v6, Family=inet6')
        dnl #
        dnl # We strongly recommend not accepting unresolvable domains if you want to
        dnl # protect yourself from spam. However, the laptop and users on computers
        dnl # that do not have 24x7 DNS do need this.
        dnl #
        FEATURE(`accept_unresolvable_domains')dnl
        dnl #
        dnl FEATURE(`relay_based_on_MX')dnl
        dnl #
        dnl # Also accept email sent to "localhost.localdomain" as local email.
        dnl #
        LOCAL_DOMAIN(`localhost.localdomain')dnl
        dnl #
        dnl # The following example makes mail from this host and any additional
        dnl # specified domains appear to be sent from mydomain.com
        dnl #
        dnl MASQUERADE_AS(`mydomain.com')dnl
        dnl #
        dnl # masquerade not just the headers, but the envelope as well
        dnl #
        dnl FEATURE(masquerade_envelope)dnl
        dnl #
        dnl # masquerade not just @mydomainalias.com, but @*.mydomainalias.com as well
        dnl #
        dnl FEATURE(masquerade_entire_domain)dnl
        dnl #
        dnl MASQUERADE_DOMAIN(localhost)dnl
        dnl MASQUERADE_DOMAIN(localhost.localdomain)dnl
        dnl MASQUERADE_DOMAIN(mydomainalias.com)dnl
        dnl MASQUERADE_DOMAIN(mydomain.lan)dnl
        MAILER(smtp)dnl
        MAILER(procmail)dnl
        dnl MAILER(cyrusv2)dnl
        FEATURE(`dnsbl',`zen.spamhaus.org',`Rejected - your IP is blacklisted by http://www.spamhaus.org')

  • Technologies used in Remote Administration applications(not RD)

    - by Michael
    I want to know what kinds of technologies are used nowadays as the underlying screen capture engine for remote administration software like VNC, pcAnywhere, TeamViewer, RAC Remote Administrator, etc. The programming language is not so important; I just want to know whether a driver needs to be developed that polls video memory 30 times per second, or whether there are any COM objects built into Windows to help with this. I'm not interested in 3rd party components for doing this. Do I have to use DirectX facilities? I just want a starting point for developing my own screen stream capture engine that is less of a CPU hog.
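
    For a user-mode starting point that needs no driver at all, the classic baseline is a GDI BitBlt of the desktop device context, repeated N times per second; the faster approaches (mirror drivers, DirectX front-buffer reads) are usually chosen precisely because this polling costs more CPU. A sketch of capturing one frame with plain Win32 calls:

        #include <windows.h>

        // Grab one frame of the primary screen into a GDI bitmap. Call it in a
        // loop (and diff consecutive frames) for a rough capture engine.
        HBITMAP CaptureScreenOnce() {
            const int w = GetSystemMetrics(SM_CXSCREEN);
            const int h = GetSystemMetrics(SM_CYSCREEN);

            HDC screenDC = GetDC(NULL);                       // DC for the whole screen
            HDC memDC = CreateCompatibleDC(screenDC);
            HBITMAP bmp = CreateCompatibleBitmap(screenDC, w, h);
            HGDIOBJ old = SelectObject(memDC, bmp);

            // CAPTUREBLT also includes layered (translucent) windows.
            BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY | CAPTUREBLT);

            SelectObject(memDC, old);
            DeleteDC(memDC);
            ReleaseDC(NULL, screenDC);
            return bmp;                                       // caller frees it with DeleteObject()
        }

        int main() {
            HBITMAP frame = CaptureScreenOnce();
            DeleteObject(frame);
            return 0;
        }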

  • How to execute with /bin/false shell

    - by Amar
    Hello, I am trying to set up per-user FastCGI scripts, each running on a different port and as a different user. Here is an example of my script:

        #!/bin/bash
        BIND=127.0.0.1:9001
        USER=user
        PHP_FCGI_CHILDREN=2
        PHP_FCGI_MAX_REQUESTS=10000
        etc...

    However, if I add a user with /bin/false (which I want, since this will be something like shared hosting and I don't want users to have shell access), the script is run under user 1001, 1002, etc., which, as I googled, might be a security hole. My question is: is it possible to allow user(s) to execute shell scripts but prevent them from logging in via SSH? Thank you

  • Using Django Admin for a custom database solution

    - by Prashanth Ellina
    A client wants a simple intranet application to manage his process. He runs a quarry and wishes to track the number of loads delivered per day and the associated activities. Since I knew about Django's excellent Admin interface, I figured I could define the "schema" in models.py and have Django Admin generate the forms. I did exactly that, and the result is not bad at all; I've been able to customize the look and feel to suit the client's taste. Some questions: Is Django Admin the right choice for such a use case? Will I run into problems in the future due to the flexibility of the framework? Is there a better framework out there specifically designed for this use case (general database management for small businesses)? I prefer ones written in Python, since I can hack on them to customize. Thanks!

  • Share data between usercontrols

    - by toraan
    On my page I have n user controls (all the same control) and I need to communicate between them (to be more specific, I need to pass one value). I don't want to involve the hosting page for that. The controls act as "pagers" and interact with the paged data on the hosting page via events that the hosting page is subscribed to. So when the user clicks on one of the pagers and changes its state, the other controls should know about it and change themselves accordingly. I cannot use ViewState because viewstate is per control, and so is control state. Can I use Session for that? (Session is shared, and there is only one value I need to store.) Or maybe there is something better I can use? (No QueryString.)

  • Convert Chunk of Data into Tabular Format Using Perl

    - by neversaint
    I have data that looks like this:

        1:SRX000566
        Submitter: WoldLab
        Study: RNASeq expression profiling for ENCODE project(SRP000228)
        Sample: Human cell line GM12878(SRS000567)
        Instrument: Solexa 1G Genome Analyzer
        Total: 4 runs, 62.7M spots, 2.1G bases
        Run #1: SRR002055, 11373440 spots, 375323520 bases
        Run #2: SRR002063, 22995209 spots, 758841897 bases
        Run #3: SRR005091, 13934766 spots, 459847278 bases
        Run #4: SRR005096, 14370900 spots, 474239700 bases

        2:SRX000565
        Submitter: WoldLab
        Study: RNASeq expression profiling for ENCODE project(SRP000228)
        Sample: Human cell line GM12878(SRS000567)
        Instrument: Solexa 1G Genome Analyzer
        Total: 3 runs, 51.2M spots, 1.7G bases
        Run #1: SRR002052, 12607931 spots, 416061723 bases
        Run #2: SRR002054, 12880281 spots, 425049273 bases
        Run #3: SRR002060, 25740337 spots, 849431121 bases

        3:SRX012407
        Submitter: GEO
        Study: GSE17153: Illumina sequencing of small RNAs from C. elegans embryos(SRP001363)
        Sample: Caenorhabditis elegans(SRS006961)
        Instrument: Illumina Genome Analyzer II
        Total: 1 run, 3M spots, 106.8M bases
        Run #1: SRR029428, 2965597 spots, 106761492 bases

    Is there a compact way to convert it into tabular (tab-separated) format, i.e. one entry/row per chunk (in this case, 3 rows)? I tried this, but it doesn't seem to work:

        perl -laF/\n/ -000ne"print join chr(9),@F" myfile.txt

  • Most useful Rails plugins, Ruby libraries and Ruby gems?

    - by Srinivas Iyer
    I have seen many sites that provide whole lists of Rails plugins, Ruby libraries and Ruby gems, but we hardly use any of them, some do not suit our requirements, and we spend a whole lot of time searching for useful plugins that do. I have created this poll so people can post useful libraries, gems and plugins they have come across. It would be a great help for newbies like me and for the entire Ruby on Rails community. Note: to keep this poll as useful as possible, please remember to post only one library, gem, or plugin per answer; mention the name of the library, gem, or plugin you find useful; and include the URL of the resource. We don't want duplicate answers, so before posting check whether the library has been mentioned already. Thanks

  • how to get game sound or how to create good sound for iPhone

    - by iPhone Fun
    Hi all, I am having a problem getting sound: how do I get good sounds for our iPhone games, or how do I make good-quality sounds for iPhone game applications so that the applications become more attractive? There are many sites like www.freesound.org, but I am not able to find good sounds to my liking. If anyone knows where we can get good sounds, or how to make them, please help me, as this will be helpful to everyone using sounds in iPhone (and other) applications. Thanks in advance, and sorry for my bad English.

  • Slow insert speed in Postgresql memory tablespace

    - by Prashant
    Hi, I have a requirement where I need to store records into a database at a rate of 10,000 records/sec (with indexing on a few fields). The number of columns in one record is 25. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate, I changed the tablespace from disk to RAM; with that I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the postgres config:

        Indexes : no
        fsync : false
        logging : disabled

    Other information:

        Tablespace : RAM
        Number of columns in one row : 25 (mostly integers)
        CPU : 4 core, 2.5 GHz
        RAM : 48 GB

    I am wondering why a single insert query is taking around 0.2 msec on average when the database is not writing anything to disk (as I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant
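
    One thing worth trying on the client side is streaming the batch through COPY instead of individual INSERT statements; in many setups it is noticeably faster even with fsync already off. A sketch using libpq from C++ (the table name "samples" and its columns are made up for illustration):

        #include <libpq-fe.h>
        #include <cstdio>
        #include <string>

        int main() {
            PGconn* conn = PQconnectdb("dbname=test");
            if (PQstatus(conn) != CONNECTION_OK) {
                std::fprintf(stderr, "%s", PQerrorMessage(conn));
                return 1;
            }

            // Enter COPY mode once, then stream all rows over the same channel.
            PGresult* res = PQexec(conn, "COPY samples (id, value) FROM STDIN");
            if (PQresultStatus(res) != PGRES_COPY_IN) {
                std::fprintf(stderr, "%s", PQerrorMessage(conn));
                PQclear(res);
                PQfinish(conn);
                return 1;
            }
            PQclear(res);

            for (int i = 0; i < 100000; ++i) {
                // Text COPY format: tab-separated columns, newline-terminated rows.
                std::string row = std::to_string(i) + "\t" + std::to_string(i * 2) + "\n";
                PQputCopyData(conn, row.c_str(), static_cast<int>(row.size()));
            }
            PQputCopyEnd(conn, nullptr);          // finish the COPY stream

            res = PQgetResult(conn);              // final status of the COPY command
            PQclear(res);
            PQfinish(conn);
            return 0;
        }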
