Search Results

Search found 2907 results on 117 pages for 'ad lds'.

  • Automounting Active Directory home drives on a Linux server on login

    - by Ethan
    I've got a CentOS 5.7 box authenticating against Active Directory through PBIS Open (the new Likewise Open), which works well. Now I'm trying to get the server to automount the user's AD home directory, located at //ad.server.dom/shares/home directories (yes, there's a space in the path; I didn't set this up). Each user has a directory in there with the same name as the user. I've tried to get pam_mount working, but it has a series of issues on RedHat and friends, and I can't seem to get it working. The directory does need to be automounted for the server to perform its role. My reading on automount suggests that there's no way to get it to do its thing with authentication, though I'm happy to be proved wrong. I've looked at this resource, but it requires RedHat (and thus CentOS) version 6 or higher, and newer packages than I have. I can manually (as root) mount the AD directory using the command mount.cifs "//ad.server.dom/Shares/home directories/testuser" /home/local/AD/testuser/nfs_mount/ -o username=testuser and when I log in as testuser, I can see all of the sample files in the nfs_share directory. Any tips towards the right direction would be highly appreciated. This is going to be on a server at a college, so it needs to be fairly stable, and would lead towards more Linux adoption there.
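
    One possible direction, not from the original post: a minimal autofs sketch that mounts each user's share on demand using a service-account credentials file, so no interactive authentication is needed at login. The mount point, map file, credentials file and account below are assumptions, and the space in the share name is awkward in autofs maps; the backslash escape shown may need adjusting for your autofs version.

        # /etc/auto.master
        /home/local/AD  /etc/auto.ad  --timeout=300

        # /etc/auto.ad : one wildcard entry; & expands to the username being looked up
        *  -fstype=cifs,rw,credentials=/etc/auto.ad.creds  ://ad.server.dom/Shares/home\ directories/&

        # /etc/auto.ad.creds (root-readable only)
        username=svc_mount
        password=secret
        domain=AD

    The trade-off is that everything is mounted with one service account rather than the logging-in user's own credentials, which may or may not be acceptable for per-user permissions.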

    Read the article

  • Web interface to allow users to change their Active Directory password

    - by csexton
    I have a few web applications that use Active Directory to authenticate. What I would like to be able to do is provide a simple web page that would allow users to update their AD password. This wasn't a problem when the majority of the users had windows machines that connected to this AD server (and could ctrl-alt-del to change the password), but we are moving away from that and the AD server is mostly for web apps. Is there a simple solution for this, or am I looking at the big LDAP managers?

    Read the article

  • Weird IIS with Windows Authentication + IE problem

    - by Paulius Maruška
    Hello. I have a website running on IIS and using Windows Authentication. All users that are configured to get access to the site are from an AD domain (not local users). In the properties of the website, I have set the AD domain as the realm. Now, when using Firefox, Safari or Chrome, everything is fine. When the user tries to open the site, he gets the login box. He enters simply "username" and "password" (let's pretend that it's an actual login and password :P) and he gets into the site. When using IE, however, things get nasty. When the user tries to open the site, he gets the login box. The user enters the "username" and "password" again, but those get rejected! And when the login box pops up the second time, it has the username filled in as "web-server-domain-name\username", which is wrong, because web-server-domain-name is not the domain where all the users reside (that's "ad-domain"). I've spent days trying to figure out what's going on... Note that if I manually enter "ad-domain\username", I get accepted into the site without problems. So my guess is that IE sends the wrong username if the domain is not specified. Anyway, IE is the only browser that triggers this behavior! Is it possible to do a server-side fix? Maybe it's possible to somehow auto-map the users to AD users? If it's not solvable server-side, is there a client-side fix for this? Thank you. PS: I'm more of a programmer than a sysadmin, so configuring servers isn't my strong side... :P UPDATE: @Evan: Yes, "Digest authentication for Windows domain servers" is also enabled. @Eric: The IIS version is 6.0. The authentication methods enabled are Integrated and Digest; all other methods are disabled. As for the security log: I looked at it when doing a "username" and "password" login in Chrome/Firefox and when doing an "ad-domain\username" and "password" login from IE, and the generated log messages are the same (I see no difference, anyway). When entering "username" and "password" I don't see any errors in the security (or any other) log, so I can't tell what method it's trying to use. UPDATE 2: As suggested by Eric in the comments, I played around with Fiddler... While playing with it, I noticed that when "username" and "password" are entered in FF and IE, the "Authorization" header value (encrypted) sent by IE is almost twice as long as the one sent by FF. I tried disabling Windows Integrated authentication and leaving only Digest enabled; that fixed the problem (meaning IE used the right realm just like the other browsers), but it caused a bazillion other problems with my site, because with Digest, user impersonation on the server doesn't work (which causes problems when connecting to the database, etc). Any ideas?

    Read the article

  • Debian and active directory authentication

    - by Tobia
    I'm trying to link a Debian server's authentication to Active Directory. I followed this tutorial: http://wiki.debian.org/Authenticating_Linux_With_Active_Directory but I'm stuck on getent passwd, because it doesn't list the AD users, only local ones. This is my nsswitch.conf:
        passwd: files winbind
        group:  files winbind
        shadow: files winbind
    And I'm sure the box is connected to AD, because wbinfo -u lists all AD users. What have I missed?
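
    A detail worth checking, not from the original post: winbind only enumerates accounts through getent when told to, and it needs an idmap range to assign UIDs and GIDs. A minimal smb.conf sketch follows; the domain name and range are assumptions, and older Samba releases spell the range as "idmap uid" / "idmap gid" instead.

        [global]
            workgroup = YOURDOMAIN
            security = ads
            idmap config * : backend = tdb
            idmap config * : range = 10000-99999
            winbind enum users = yes
            winbind enum groups = yes

    After changing this, restarting winbind and flushing nscd (nscd -i passwd) before re-running getent passwd is a reasonable next step. Note that even without enumeration enabled, getent passwd someaduser for a single named user should still resolve.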

    Read the article

  • getent passwd fails, getent group works?

    - by slugman
    I've almost got my AD integration working completely on my OpenSUSE 12.1 server. I have an OpenSUSE 11.4 system successfully integrated into our AD environment. (Meaning, we use LDAP to authenticate against the AD directory via Kerberos, so we can log in to our *nix systems as AD users, using the name service caching daemon to cache our passwords and groups.) Also, it's important to note these systems are on our LAN and SSL authentication is disabled. I am almost all the way there. nss_ldap is finally authenticating with the LDAP server (as /var/log/messages shows), but right now I have another problem: getent passwd and getent shadow fail (they show local accounts only), but getent group works! getent group shows all my AD groups! I copied over the relevant configuration files from my working OpenSUSE 11.4 box: /etc/krb5.conf, /etc/nsswitch.conf, /etc/nscd.conf, /etc/samba/smb.conf, /etc/sssd/sssd.conf, /etc/pam.d/common-session-pc, /etc/pam.d/common-account-pc, /etc/pam.d/common-auth-pc, /etc/pam.d/common-password-pc. I didn't modify anything between the two. I really don't think I need to modify anything, because getent passwd, getent shadow, and getent group all work fine on the OpenSUSE 11.4 box. Attempting to restart the nscd service unfortunately didn't do much, and neither did running /usr/sbin/nscd -i passwd. Do any of you admin gurus have any suggestions? Honestly, I'm happy I made it this far. I'm almost there, guys!
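
    One diagnostic worth trying, offered as an assumption rather than something from the original post: building passwd entries needs the RFC 2307 attributes (uidNumber, gidNumber, a home directory and shell) to be readable by the binding account, while group resolution needs much less, which could explain groups working while passwd and shadow fail. A hypothetical ldapsearch (bind DN, base DN and user are placeholders) to confirm those attributes are actually visible:

        ldapsearch -x -H ldap://dc1.example.local \
          -D "CN=linuxbind,OU=Service Accounts,DC=example,DC=local" -W \
          -b "DC=example,DC=local" "(sAMAccountName=someuser)" \
          uidNumber gidNumber unixHomeDirectory loginShell

    If those attributes come back empty for the users, the passwd map has nothing to build entries from, which would explain passwd and shadow failing while group lookups still work.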

    Read the article

  • TLS_REQCERT and PHP with LDAPS

    - by John
    Problem: Secure LDAP queries via the command line and PHP to an AD domain controller with a self-signed certificate. Background: I am working on a project where I need to enable LDAP look-ups from a PHP web application to an MS AD domain controller that is using a self-signed certificate. This self-signed certificate is also using a domain name that is not an FQDN; think of something like people.campus as the domain name. The web application would take the user's credentials and pass them on to the AD domain controller to verify whether the credentials are a match or not. This seems simple, but I am having problems trying to get PHP and the self-signed certificate to work together. Some people have suggested that I change the TLS_REQCERT variable from "request" to "never" within the OpenLDAP configuration. I am concerned that this might have larger implications, such as a man-in-the-middle attack, and I am not comfortable changing this setting to never. I have also read in some places online that one can take a certificate and place it as a trusted source within the OpenLDAP configuration file. I am curious whether that is something I could do in my situation. Can I, from the command line, obtain the self-signed certificate that the AD domain controller is using, save it to a file, and then have OpenLDAP use that file for the trust that it needs, so that I do not need to change the variable from request to never? I do not have access to the AD domain controller and as a result cannot export the certificate. If there is a way to obtain the certificate from the command line, what commands do I need to use? Is there an alternate method of handling this issue that would be better in the long run? I have some CentOS servers and some Ubuntu servers that I am working with to get this going. Thanks in advance for your help and ideas.
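
    For what it's worth, a sketch of one common approach (hostnames and paths here are placeholders): pull the certificate the domain controller presents on the LDAPS port with openssl, save it, and point the client's ldap.conf at it, so TLS_REQCERT can stay at its strict default. The command-line tools and PHP's ldap extension both go through libldap, so they read the same ldap.conf.

        # grab the certificate presented on port 636 and save the PEM block
        openssl s_client -connect dc1.people.campus:636 -showcerts </dev/null \
          | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' \
          > /etc/openldap/cacerts/ad-dc.pem

        # /etc/openldap/ldap.conf (on Ubuntu: /etc/ldap/ldap.conf)
        TLS_CACERT /etc/openldap/cacerts/ad-dc.pem

    One caveat: verification still compares the hostname in the LDAP URI against the names in the certificate, so the non-FQDN subject may require a matching DNS or /etc/hosts entry on the client.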

    Read the article

  • GPO startup script not copying files

    - by marcwenger
    I created a GPO startup script to execute for computers in a specific AD container. The script takes a file from the AD netlogon share and places it in a directory on the computer. Given the right permissions (i.e., running it as myself), the script executes just fine and the file copies. But it doesn't work on startup: the file does not copy over from the AD server. The startup script should run as LocalSystem (am I right?). So the question is, why do the files not copy on startup? Could it be because of the permissions of the LocalSystem account? Is reading the registry problematic on startup? Is obtaining files from the AD netlogon folder problematic on startup? Am I missing it completely? My test machine does have the registry key and local directories described in the script. I myself have standard user permissions on the test machine. The AD server is Windows 2008, the test client is Windows XP SP3 (and soon to be Windows 7, where I assume permissions issues will be inevitable).
        Dim wShell, fso, oraHome, tnsHome, key, srcDir
        Set wShell = WScript.CreateObject("WScript.Shell")
        Set fso = CreateObject("Scripting.FileSystemObject")
        key = "HKLM\Software\Oracle\Oracle_Home"
        On Error Resume Next
        oraHome = wShell.RegRead(key)
        If Err.Number = 0 Then
            tnsHome = oraHome + "\" + "network\admin\"
            srcDir = wShell.ExpandEnvironmentStrings("%logonserver%") + "\netlogon\UpdatedFiles\"
            fso.CopyFile srcDir + "file1.ext", tnsHome, True
        End If
    Side note: To ensure that the script is properly deployed, I purposely put some errors in the script, and on the next startup the error message appeared. So I know the GPO is deployed properly.

    Read the article

  • DNS caching server config problem

    - by Alex
    I have a BIND DNS caching-only server setup that is working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD related. So my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However, it does not seem to be working, which leaves me to think that my caching server is not forwarding properly. For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but none of the objects in that domain). Any suggestions? Here is my named.conf file:
        options {
            directory "/var/named";
            listen-on { 192.168.0.14; 127.0.0.1; };
            forwarders { ; ; };
            forward first;
        };
        zone "." in {
            type hint;
            file "db.cache";
        };
        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "db.127.0.0";
        };
        //forward zone for mydomain.local
        zone "mydomain.local" {
            type forward;
            forwarders { 192.168.1.21; };
        };
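
    Two cheap checks, not from the original post: query the caching server directly with dig to see whether the lookup fails fast or times out, and try marking the zone as forward only so BIND doesn't fall back to iterating from the root servers, which can never resolve a private .local name. The "forward only" line is the speculative part of this sketch.

        # from a client, ask the caching server explicitly
        dig @192.168.0.14 dc1.mydomain.local

        // hypothetical tweak to the forward zone in named.conf
        zone "mydomain.local" {
            type forward;
            forward only;
            forwarders { 192.168.1.21; };
        };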

    Read the article

  • Issues related to storing photos in Active Directory?

    - by Joe_Jones_442Hemi
    Are there any known issues with storing employee photos in AD, provided you store them in the compliant sizes and formats? Is there a critical mass at which you break or could corrupt the AD database? I'm trying to understand some of the server team's deep concerns with our intent to store employee photos in AD... they fear it will corrupt the database, or that replication issues will occur globally, etc. We're about a 3,000-employee company.

    Read the article

  • Should the MAC Tables on a switch Stack be the same between sessions?

    - by Kyle Brandt
    According to Cisco's documentation: "The MAC address tables on all stack members are synchronized. At any given time, each stack member has the same copy of the address tables for each VLAN." However, when logged into the switch I see the following:
        ny-swstack01#show mac ad | inc Total
        Total Mac Addresses for this criterion: 222
        ny-swstack01#ses 2
        ny-swstack01-2#show mac ad | inc Total
        Total Mac Addresses for this criterion: 229
        ny-swstack01-2#exit
        ny-swstack01#ses 3
        ny-swstack01-3#show mac ad | inc Total
        Total Mac Addresses for this criterion: 229
        ny-swstack01-3#exit
        ny-swstack01#ses 4
        ny-swstack01-4#show mac ad | inc Total
        Total Mac Addresses for this criterion: 235
        ny-swstack01-4#exit
        ny-swstack01#show mac ad | inc Total
        Total Mac Addresses for this criterion: 222
    Going back and forth, this isn't just the table changing over time either; within certain sessions there are entries that I don't see from the master session. We are currently waiting to hear back from Cisco on this, but has anyone run into this before? I stumbled upon this when looking into unicast flooding; one of the hosts that is a destination MAC of the flooding has a MAC entry that appears in session 3, but nowhere else. Also, I checked and all sessions show the same aging time.

    Read the article

  • Why will my AdRotator not display images? (Image paths are correct)

    - by KSwift87
    Hi. I'm writing a simple web application in C# and I've gotten to the part where I must add an AdRotator object and link four images to it. I have done this, but no matter what I do the images will not show up; only the alternate text. It makes no sense because the paths are correct. Supposedly AdRotator controls are really simple to use... But anyway, below is my code.
    Search.aspx:
        <%@ Page Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="Search.aspx.cs" Inherits="Module6.WebForm2" Title="Search" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="head" runat="server">
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
          <form id="Search" runat="server">
            This is the Search page!
            <div class="StartCalendar">
              <asp:Calendar ID="Calendar1" runat="server" Caption="Start Date" TodayDayStyle-Font-Bold="true" TodayDayStyle-ForeColor="Crimson" SelectedDayStyle-BackColor="DarkCyan" />
            </div>
            <div class="EndCalendar">
              <asp:Calendar ID="Calendar2" runat="server" Caption="End Date" TodayDayStyle-Font-Bold="true" TodayDayStyle-ForeColor="Crimson" SelectedDayStyle-BackColor="DarkCyan" />
            </div>
            <br /><br />
            <div class="Search">
              <asp:Button ID="btnSearch" runat="server" Text="Search" UseSubmitBehavior="true" />
            </div><br /><br />
            <div class="CenterAd">
              <asp:AdRotator ID="AdRotator1" runat="server" Target="_blank" AdvertisementFile="~/Advertisements.xml" />
            </div>
            <br /><br />
            <div class="Results">
              <asp:GridView ID="gvResults" runat="server" />
            </div>
          </form>
        </asp:Content>
    Advertisements.xml:
        <?xml version="1.0" encoding="utf-8" ?>
        <Advertisements>
          <Ad>
            <ImageURL>~/images/colts.jpg</ImageURL>
            <AlternateText>Colts Image</AlternateText>
          </Ad>
          <Ad>
            <ImageURL>~/images/conseco.gif</ImageURL>
            <AlternateText>Conseco Image</AlternateText>
          </Ad>
          <Ad>
            <ImageURL>~/images/IndianapolisIndians.png</ImageURL>
            <AlternateText>Indianapolis Indians Image</AlternateText>
          </Ad>
          <Ad>
            <ImageURL>~/images/pacers.gif</ImageURL>
            <AlternateText>Pacers Image</AlternateText>
          </Ad>
        </Advertisements>
    Any and all help is GREATLY appreciated.
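
    One thing worth double-checking, an observation rather than part of the original post: the advertisement-file schema the stock AdRotator reads is case-sensitive, and the element is documented as ImageUrl rather than ImageURL. When that element isn't recognized, the rendered img typically ends up with an empty src, so the browser shows only the alternate text, which matches the symptom described. A sketch of one entry in the documented shape (NavigateUrl and Impressions are optional and only shown for completeness; the URL is a placeholder):

        <Ad>
          <ImageUrl>~/images/colts.jpg</ImageUrl>
          <NavigateUrl>http://www.example.com/</NavigateUrl>
          <AlternateText>Colts Image</AlternateText>
          <Impressions>25</Impressions>
        </Ad>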

    Read the article

  • upload file on database through php code

    - by ruhit
    Hi all. I have made an application to upload files and it's working out well. Now I want to upload my files to the database, and I also want to display the uploaded file names in my list by accessing the database... so kindly help me. My code is given below:
        function uploadFile() {
            global $template;
            //$this->UM_index = $this->session->getUserId();
            switch($_REQUEST['cmd']){
                case 'upload':
                    $filename = array();
                    //set upload directory
                    //$target_path = "F:" . '/uploaded/';
                    for($i=0;$i<count($_FILES['ad']['name']);$i++){
                        if($_FILES["ad"]["name"]) {
                            $filename = $_FILES["ad"]["name"][$i];
                            $source = $_FILES["ad"]["tmp_name"][$i];
                            $type = $_FILES["ad"]["type"];
                            $name = explode(".", $filename);
                            $accepted_types = array('text/html','application/zip', 'application/x-zip-compressed', 'multipart/x-zip', 'application/x-compressed');
                            foreach($accepted_types as $mime_type) {
                                if($mime_type == $type) {
                                    $okay = true;
                                    break;
                                }
                            }
                            $continue = strtolower($name[1]) == 'zip' ? true : false;
                            if(!$continue) {
                                $message = "The file you are trying to upload is not a .zip file. Please try again.";
                            }
                            $target_path = "F:" . '/uploaded/'.$filename; // change this to the correct site path
                            if(move_uploaded_file($source, $target_path )) {
                                $zip = new ZipArchive();
                                $x = $zip->open($target_path);
                                if ($x === true) {
                                    $zip->extractTo("F:" . '/uploaded/'); // change this to the correct site path
                                    $zip->close();
                                    unlink($target_path);
                                }
                                $message = "Your .zip file was uploaded and unpacked.";
                            } else {
                                $message = "There was a problem with the upload. Please try again.";
                            }
                        }
                    }
                    echo "Your .zip file was uploaded and unpacked.";
                    $template->main_content = $template->fetch(TEMPLATE_DIR . 'donna1.html');
                    break;
                default:
                    $template->main_content = $template->fetch(TEMPLATE_DIR . 'donna1.html');
                    //$this->assign_values('cmd','uploads');
                    $this->assign_values('cmd','upload');
            }
        }
    And my HTML page is:
        <html>
        <link href="css/style.css" rel="stylesheet" type="text/css">
        <!--<form action="{$path_site}{$index_file}" method="post" enctype="multipart/form-data">-->
        <form action="index.php?menu=upload_file&cmd=upload" method="post" enctype="multipart/form-data">
        <div id="main">
            <div id="login">
                <br /> <br />
                Ad No 1: <input type="file" name="ad[]" id="ad1" size="10" />&nbsp;&nbsp;Image(.zip)<input type="file" name="ad[]" id="ad1" size="10" />
                Sponsor By : <input type="text" name="ad3" id="ad1" size="25" />
                <br /> <br />
            </div>
        </div>
        </form>
        </html>
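
    A possible direction for the database part, offered as a sketch with an assumed MySQL table and column names rather than as part of the original code: after each successful move_uploaded_file, insert a row with PDO, then read the rows back to build the list of uploaded file names.

        // assumes a table: uploads (id INT AUTO_INCREMENT PRIMARY KEY, filename VARCHAR(255), uploaded_at DATETIME)
        $pdo = new PDO('mysql:host=localhost;dbname=mysite', 'dbuser', 'dbpass',
                       array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

        // record one uploaded file (call this where $filename is known, after a successful upload)
        $stmt = $pdo->prepare('INSERT INTO uploads (filename, uploaded_at) VALUES (?, NOW())');
        $stmt->execute(array($filename));

        // later, fetch the names to display in the list
        $names = $pdo->query('SELECT filename FROM uploads ORDER BY uploaded_at DESC')
                     ->fetchAll(PDO::FETCH_COLUMN);

    Storing the files themselves as BLOBs is also possible, but keeping them on disk and storing only the name or path, as above, is usually simpler.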

    Read the article

  • Syntactic sugar in PHP with static functions

    - by Anna
    The dilemma I'm facing is: should I use static classes for the components of an application just to get a nicer looking API? Example - the "normal" way:
        // example component
        abstract class Cache{
            abstract function get($k);
            abstract function set($k, $v);
        }
        class APCCache extends Cache{ ... }
        class application{
            function __construct(){ $this->cache = new APCCache(); }
            function whatever(){
                $this->cache->add('blabla');
                print $this->cache->get('blablabla');
            }
        }
    Notice how ugly $this->cache->... is. But it gets way uglier when you try to make the application extensible through plugins, because then you have to pass the application instance to its plugins, and you get $this->application->cache->... With static functions:
        interface CacheAdapter{
            function get($k);
            function set($k, $v);
        }
        class Cache{
            public static $ad;
            public static function setAdapter(CacheAdapter $a){ static::$ad = $a; }
            public static function get($k){ return static::$ad->get($k); }
            ...
        }
        class APCCache implements CacheAdapter{ ... }
        class application{
            function __construct(){ Cache::setAdapter(new APCCache); }
            function whatever(){
                Cache::add('blabla', 5);
                print Cache::get('blabla');
            }
        }
    Here it looks nicer because you just call Cache::get() everywhere. The disadvantage is that I lose the possibility to extend this class easily. But I've added a setAdapter method to make the class extensible to some point. I'm relying on the fact that I won't ever need to rewrite or replace the cache wrapper, and that I won't need to run multiple application instances simultaneously (it's basically a site, and nobody works with two sites at the same time). So, am I doing it wrong?

    Read the article

  • How to use AD/GPO/Print Services to "push out" a new printer driver to replace a broken one? How did my server get a broken driver?

    - by Zac B
    Context: We have an AD/GPO-managed corporate network with a little over a hundred PCs running Windows 7 x64, and a few managed printers. Our Server 2008 R2 primary domain controller is configured as a print server for them all.
    Problem: After a recent Windows update and restart (no printer driver updates were included) on the DC, a particular shared printer (Lexmark T650) has begun exhibiting some strange behavior. First, it prints a preceding and following blank page for almost every document, on jobs submitted by about half of the client machines (no separator page is configured on the server or any of the clients I've seen). Second, whenever someone tries to access "Printing Preferences" on any client, they receive an error message (shown as a screenshot in the original post; this happens everywhere, 100% of the time, and didn't happen before the update on the DC). Once they click "OK", the prefs screen appears (with no separator page selected) and everything seems fine. I'm not even sure if these two issues are related, but everyone seems affected by one or both of those issues.
    What I've Tried: I've been hesitant to un-deploy the problem printer, or remove it via GPO, as it's pretty heavily used. I've tried updating (via MS Update and our internal WSUS server) client machines and the DC. No printer driver updates have appeared, and no number of updates or restarts on the server or the client seems to have achieved anything other than my boss getting grumpy that I'm bouncing the domain controller so often. I've tried deleting the drivers on the server, and re-installing them from the original source that has worked for the past year... no change. I've tried selecting "New Driver" for one of the shared printers on a client machine, running as domain admin, and pushed the latest driver found by MS Update back up to the DC. This changed the version number of the driver recorded in the print server manager, but caused no change, either on the client I pushed from or on any other. The error still appears.
    Question: Why the heck is this happening? Obviously, I got a bad driver from somewhere, but how do I get rid of it? I don't know of any "roll back drivers" functionality for centrally managed print drivers like Windows offers for other devices. How would I a) get this issue resolved on a client, and b) push the fix to the other members of the domain?

    Read the article

  • Making Money from your SQL Server Blog

    - by Bill Graziano
    My SQL Server blog reading list is around one hundred blogs.  Many people are writing great content and generating lots of page views.  I see some of them running Google AdSense and trying to make a little money off their traffic.  If you want to earn some some extra money from what you’ve written there are a couple of options.  And one new option that I’m announcing here. Background Internet advertising is sold based on a few different pricing schemes.  Flat Fee.  You offer either all your impressions (page views) or some percentage of your impressions in exchange for a flat monthly fee.  CPM or cost per thousand impressions.  If the quoted price is $2 CPM you’ll get $2 for every 1,000 times the ad is displayed.  While you might think the “M” means millions, the “M” in CPM is the roman numeral for 1,000. CPC or cost per click.  This is also called PPC or pay per click.  In this method you get paid based on how many clicks there are on the ad.  CPA or cost per action.  In this method you get paid based on an action that occurs on the advertisers site after they click on the ad.  This is typically some type of sign up form.  This is how most affiliate programs work. Darren Rowse at ProBlogger has been writing about blogging and making money off blogs for years.  He has a good introduction to making money on your blog in his “Making Money” section.  If you’re interested in learning more he has a post up titled How to Make More Money From Your Blog in the New Year that links to many of his best posts on the subject. Google AdSense This is the most common method for people earning money from their blogging.  It’s easy to setup and administer.  You tell AdSense what size ads you’d like to run and it gives you a little piece of JavaScript to put on your site.  AdSense quickly learns the topics you write about and displays ads that are appropriate for your site.  I typically see ads for hosting, SQL Server tools and developer tools running in AdSense slots.  AdSense pays on a CPC model.  If you translate that back to CPM pricing you’ll see rates from $0.50 to $1.00 CPM. Amazon While you might not make much money writing books it’s now possible to make even less helping Amazon sell them.  You can sign up for an Amazon affiliate program.  Each time you send Amazon a link and someone buys the book you get a cut of that sale.  This is the CPA model from above.  Amazon can help you build some pretty nice “stores”.  Here’s the SQL Server bookstore I built for SQLTeam.com.  If you’re just putting in a page with books like I’ve done on SQLTeam you should keep your expectations low.  If you’re writing book reviews of suggesting books on your blog it really does make sense to setup an Amazon affiliate link.  People are much more likely to buy a book based on a review from a trusted source.  I always try to buy through a referral link if there is one. Amazon pays about 4% of the price as a referral fee.  You also get credit for anything else they buy while on the site.  I recently had someone buy an iPod nano with their SQL Server book making me an extra $5.60 richer!  Estimating how much you can make is difficult though.  How much attention you draw to the links and book reviews can dramatically affect the earnings. Private Ad Sales This is the hardest but potentially most lucrative option.  You sell advertising directly to companies that want to sell things to your readers.  
Typically this would be SQL Server tool vendors, hosting companies or anyone else that wants to make money off database administrators.  This is also the most difficult to do.  You’ll need the contacts at the companies and enough page views to make it worth their while.  You’ll also need software to track the page views and clicks, geo-target your ads and smooth out the impressions.  Your earnings are based on whatever you can negotiate with the companies. SQL Server Ad Network For the last couple of years I’ve run any extra ads that I sold on the SQLTeam Weblogs.  You can see an example of that on Mladen’s blog.  The ad in the upper right corner is one that I’m running for him.  (Note: Many of the ads I’m running are geo-targeted to only appear in English speaking countries.  You may see a different set of ads outside the US, Canada and the UK.  You can also see he has a couple of Google ads on his blog.)  When I run ads on his blog I split the advertising revenue with him.  They make a little and I make a little. I recently started to expand this and sell advertising specifically to run on SQL Server-related blogs.  I’m also starting to run ads on non-SQLTeam blogs.  The only way I can sell more advertising is to have more blogs to run it on.  And that’s where you come in. I’ve created a SQL Server advertising network.  I handle all the ad sales and provide the technology to serve the ads.  I handle collections and payments back to you.  You get paid at the end of each month regardless of when (or if) the advertiser actually pays.  All you need to do is add a small piece of JavaScript to your site to display the ads. If you’re writing about SQL Server and interested in earning a little money for your site I’d like to talk to you.  You can use the Contact Us page on SQLTeam.com to reach me.  Running advertising on your blog isn’t for everyone.  If you’re concerned about what advertisers might think about certain posts then you might not be a good fit.  For the most part this isn’t an issue.  You’ll also need to have a PayPal account to receive payments.  You probably won’t get rich doing this.  But you can earn extra cash on the side for doing what you would do anyway.  I do know that people have earned enough to buy themselves a nice laptop doing this. My initial target is blogs with more than 10,000 page views per month.  I expect to pay two to three times what Google pays.  If you have less than 10,000 page views per month but are still interested I’d still like to hear from you.  I may not be able to sign up smaller blogs right away but we’ll get the process started.  If you’re unsure about your traffic Google Analytics is a free tool that provides great reporting on traffic, popular posts and how people find your blog.  If you have any questions or are just curious drop me a line and I’ll try to answer your questions.

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble
    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.
    Why Columnstore?
    As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore & documenting columnstore is a compelling reason to upgrade to SQL Server 2014.
    The Customer
    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)
    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries
    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.
    SSRS, SSAS, & MDX
    Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.
    Ad Hoc Queries
    Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).
    The Details
    Classic vs. columnstore before-&-after metrics are impressive.
        Scenario        Conventional Structures              Columnstore      Δ
        SSRS via SSAS   10 - 12 seconds                      1 second         >10x
        Ad Hoc          5 - 7 minutes (300 - 420 seconds)    1 - 2 seconds    >100x
    Here are two charts characterizing this data graphically.  The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes.  As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics.  Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.
The Wins Performance: Even prior to columnstore implementation, at 10 - 12 seconds canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators.  As we’ve commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude. Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired. PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds let directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: “What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways.” Summary For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  I have documented here, the second in a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of from 10x to 100x for all report queries, including canned queries as well as reducing time for results for ad hoc queries from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
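
    For readers who want to try this on their own DW tables, a minimal sketch of the kind of index involved (SQL Server 2012 syntax; the table and column names below are made up, not DevCon's schema):

        -- nonclustered columnstore index over the commonly aggregated columns of a fact table
        CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSecurityEvent
        ON dbo.FactSecurityEvent (EventDate, SiteKey, DeviceKey, EventTypeKey, DurationSeconds);

    In SQL Server 2012 a nonclustered columnstore index makes the table read-only while it exists, which is one reason the nightly partition-switch refresh described above is a common pairing; SQL Server 2014 adds updatable clustered columnstore indexes.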

    Read the article

  • Guest Post: Christian Finn: Is Facebook About to Become a Victim of its Own Success?

    - by Michael Snow
    12.00 Print 0 false false false EN-US X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Cambria","serif"; mso-ascii-font-family:Cambria; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:Cambria; mso-fareast-theme-font:minor-latin; mso-hansi-font-family:Cambria; mso-hansi-theme-font:minor-latin;}  Since we have a number of new members of the WebCenter Evangelist team - I thought it would be appropriate to close the week with the newest hire and leader of the global WebCenter Evangelists, Christian Finn, who has just joined the Red team after many years with the small technology company up in Redmond, WA. He gave an intro to himself in an earlier post this morning but his post below is a great example of how customer engagement takes on a life of its own in our global online connected and social digital ecosystem. Is Facebook About to Become a Victim of its Own Success? What if I told you that your brand could advertise so successfully, you wouldn’t have to pay for the ads? A recent campaign by Ford Motor Company for the Ford Focus featuring Doug the spokespuppet (I am not making this up) did just that—and it raises some interesting issues for marketers and social media alike in the brave new world of customer engagement that is the Social Web. Allow me to elaborate. An article in the Wall Street Journal last week—“Big Brands Like Facebook, But They Don’t Like to Pay” tells the story of Ford’s recently concluded online campaign for the 2012 Ford Focus. (Ford, by the way, under the leadership of people such as Scott Monty, has been a pioneer of effective social campaigns.) The centerpiece of the campaign was the aforementioned Doug, who appeared as a character on Facebook in videos and via chat. (If you are not familiar with Doug, you can see him in action here, and read the WSJ story here.) You may be thinking puppet ads are a sign of Internet Bubble 2.0 and want to stop now, but bear with me. The Journal reported that Ford spent about $95M on its overall Ford Focus campaign, with TV accounting for over $60M of that spend. The Internet buy for the campaign was just over $10M, which included ad buys to drive traffic to Facebook for people to meet and ‘Like’ Doug and some amount on Facebook ads, too, to promote Doug and by extension, the Ford Focus. So far, a fairly straightforward consumer marketing story in the Internet Era. Yet here’s the curious thing: once Doug reached 10,000 fans on Facebook, Ford stopped paying for Facebook ads. Doug had gone viral with people sharing his videos with one another; once critical mass was reached there was no need to buy more ads on Facebook. Doug went on to be Liked by over 43,000 people, and 61% of his fans said they would be more likely to consider buying a Focus. According to the article, Ford says Focus sales are up this year—and increasing sales is every marketer’s goal. And so in effect, Ford found its Facebook campaign so successful that it could stop paying for it, instead letting its target consumers communicate its messages for fun—and for free. Not only did they get a 3X increase in fans beyond their paid campaign, they had thousands of customers sharing their messages in video form for months. 
Since free advertising is the Holy Grail of marketing both old and new-- and it appears social networks have an advantage in generating that buzz—it seems reasonable to ask: what would happen to brands’ advertising strategies—and the media they use to engage customers, if this success were repeated at scale? It seems logical to conclude that, at least initially, more ad dollars would be spent with social networks like Facebook as brands attempt to replicate Ford’s success. Certainly Facebook ad revenues are on the rise—eMarketer expects Facebook’s ad revenues to quintuple by 2012 compared with 2009 levels, to nearly 2.9B. That’s bad news for TV and the already battered print media and good news for Facebook. But perhaps not so over the longer run. With TV buys, you have to keep paying to generate impressions. If Doug the spokespuppet is any guide, however, that may not be true for social media campaigns. After an initial outlay, if a social campaign takes off, the audience will generate more impressions on its own. Thus a social medium like Facebook could be the victim of its own success when it comes to ad revenue. It may be there is an inherent limiting factor in the ad spend they can capture, as exemplified by Ford’s experience with Dough and the Focus. And brands may spend much less overall on advertising, with as good or better results, than they ever have in the past. How will these trends evolve? Can brands create social campaigns that repeat Ford’s formula for the Focus with effective results? Can social networks find ways to capture more spend and overcome their potential tendency to make further spend unnecessary? And will consumers become tired and insulated from social campaigns, much as they have to traditional advertising channels? These are the questions CMOs and Facebook execs alike will be asking themselves in the brave new world of customer engagement. As always, your thoughts and comments are most welcome.

    Read the article

  • securing server to server http post

    - by ad-inf
    The website is developed with JSF and Servlets, running on the Apache web server. In my website, I accept data submissions from a few restricted websites using the HTTP POST method. We exchange a secure key to ensure that the correct source is sending the data. But is there any way to ensure that the data is submitted from a specific domain / IP address only? At the application level I can check request.header('Referer'), but some proxy or firewall might hide the referer. Can this configuration be done at the firewall or webserver level to authenticate server-to-server communication? E.g., say my website is a payment gateway, integrated with www.abc.com. I want only abc.com to submit data. So a user using abc.com should be able to submit data to my website only through abc.com, and not any other website.
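
    Since the post mentions Apache, here is a hedged sketch of a webserver-level restriction: limit the POST endpoint to the partner's source IP addresses, so the check happens before the application sees the request. The path and addresses are placeholders, and this only helps if the partner posts directly from known, stable server IPs.

        # Apache 2.4 syntax
        <Location "/gateway/submit">
            Require ip 203.0.113.10 203.0.113.11
        </Location>

        # Apache 2.2 equivalent
        <Location "/gateway/submit">
            Order deny,allow
            Deny from all
            Allow from 203.0.113.10 203.0.113.11
        </Location>

    Note the distinction: if the data is posted by the end user's browser (a redirect flow) rather than directly by abc.com's servers, IP restrictions won't apply, and signing the payload with the shared key is the usual alternative.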

    Read the article

  • What is a good network for full-page rich ads?

    - by Vishnu
    I'm currently developing a website where users will be able to upload content. I would like to be able to show a full-page ad whenever someone tries to view the content. The ad should take up most of the screen, and I should be able to have a "continue to the content --" link at the top. Preferably, I want something like what is currently on Forbes (if you haven't seen it, here: http://www.forbes.com/fdc/welcome.shtml but with an ad in the black area). Of course, the most revenue is the best. Thanks.

    Read the article

  • Setup IPv4 local on IPv6 VPS

    - by A.D.
    I have a dedicated server running multiple IPv6-only OpenVZ containers. I want them to be able to communicate with the IPv4 internet, but I realized that isn't going to be possible with IPv6 only. So they need to have an IPv4 address as well; not sure if a local address will work for it, but pretty sure it should. I added 169.254.1.100 in the container .conf file, but when I try to start it, I get this:
        Adding IP address(es): (the IPv6 address) 169.254.1.100
        arpsend: 169.254.1.100 is detected on another computer : 00:04:9b:f2:b0:00
        vps-net_add WARNING: arpsend -c 1 -w 1 -D -e 169.254.1.100 eth0 FAILED
    I did a lot of research, and searched Server Fault before posting this, but found nothing relating to this.
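
    Not from the original post, but a common pattern for this situation: give each container a private RFC 1918 address (169.254.x.x link-local addresses tend to collide, as the arpsend warning suggests) and NAT their outbound IPv4 traffic on the hardware node. A rough sketch, with the address range and interface as assumptions:

        # in the container config, use a private address instead of a link-local one, e.g.
        #   IP_ADDRESS="<the IPv6 address> 10.0.0.100"

        # on the hardware node: masquerade the containers' outbound IPv4 traffic
        iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j SNAT --to-source <node public IPv4>

    This gives the containers outbound IPv4 connectivity only; inbound services would still need port-forwarding (DNAT) rules on the node.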

    Read the article

  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly for an asp.net application where the data is not accessed via soa. Meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendation still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 sp1. Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF and life is not bad.make you think. EF and inheritance The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table the hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to table per class model as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated sql is incredibly awful, takes a long time to get parsed by the EF and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance or as little as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. Ok, this is bad, EF concept is to allow you to create your object structure without (or with as little as possible) consideration on the actual database implementation of your table. It completely fails at this. So, recommendations? Avoid inheritance if you can, the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified sql-generation tool for querying, but there are still advantages to using it. And ways to implement mechanism that are similar to inheritance. Bypassing inheritance with Interfaces First thing to know with trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class a base class. Don't even try it, it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example here is a IEntity interface that allow you to define Associations between EF entities where you don't know at design time what the type of the entity would be. public enum EntityTypes{ Unknown = -1, Dog = 0, Cat } public interface IEntity { int EntityID { get; } string Name { get; } Type EntityType { get; } } public partial class Dog : IEntity { // implement EntityID and Name which could actually be fields // from your EF model Type EntityType{ get{ return EntityTypes.Dog; } } } Using this IEntity, you can then work with undefined associations in other classes // lets take a class that you defined in your model. 
// that class has a mapping to the columns: PetID, PetType public partial class Person { public IEntity GetPet() { return IEntityController.Get(PetID,PetType); } } which makes use of some extension functions: public class IEntityController { static public IEntity Get(int id, EntityTypes type) { switch (type) { case EntityTypes.Dog: return Dog.Get(id); case EntityTypes.Cat: return Cat.Get(id); default: throw new Exception("Invalid EntityType"); } } } Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many, many-to-many relationship, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effet of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regards. Compiled Queries Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So the next run, you will save the time it takes to parse the tree. Do not discount that as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL and using CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached). An aside here in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery< string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate: static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => ctx.DogSet.FirstOrDefault(it => it.ID == id)); or using linq static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault()); to call the query: query_GetDog.Invoke( YourContext, id ); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, where as EntitySQL is not. However, there are other consideration... 
Includes Lets say you want to have the data for the dog owner to be returned by the query to avoid making 2 calls to the database. Easy to do, right? EntitySQL string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)).Include("Owner"); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); CompiledQuery static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault()); Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavotireToy and so on. Basicly, you want to tell the query which associations to load. It is easy to do with EntitySQL public Dog Get(int id, string include) { string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)) .IncludeMany(include); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); } The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (that accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery however, we run into numerous problems: The obvious static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault()); will choke when called with: query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" ); Because, as mentionned above, Include() only wants to see a single path in the string and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault()); Wrong again, this time it is because the EF cannot parse IncludeMany because it is not part of the functions that is recognizes: it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Includes() only takes a single one. What to do? You could decide that you will never ever need more than, say 20 Includes, and pass each separated strings in a struct to CompiledQuery. But now the query looks like this: from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3) .Include(include4).Include(include5).Include(include6) .[...].Include(include19).Include(include20) where dog.ID == id select dog which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery< with CompiledQuery? Then set the includes on that? 
Well, that what I would have thought so as well: static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, ObjectQuery<Dog>>((ctx, id) => (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog)); public Dog GetDog( int id, string include ) { ObjectQuery<Dog> oQuery = query_GetDog(id); oQuery = oQuery.IncludeMany(include); return oQuery.FirstOrDefault; } That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query because it is an entirely new one now! So, the expression tree needs to be reparsed and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used because the syntax is checked at compile time, but due to limitation, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required. Passing more than 3 parameters to a CompiledQuery Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pitance, but it can be improved on very easily. public struct MyParams { public string param1; public int param2; public DateTime param3; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where dog.Age == myParams.param2 && dog.Name == myParams.param1 and dog.BirthDate > myParams.param3 select dog); public List<Dog> GetSomeDogs( int age, string Name, DateTime birthDate ) { MyParams myParams = new MyParams(); myParams.param1 = name; myParams.param2 = age; myParams.param3 = birthDate; return query_GetDog(YourContext,myParams).ToList(); } Return Types (this does not apply to EntitySQL queries as they aren't compiled at the same time during execution as the CompiledQuery method) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other functions downstream wants to change the query in some way: static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public IEnumerable<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name); } public void DataBindStuff() { IEnumerable<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), it will invalidate the compiled query and be force to re-parse. So, the rule of thumb is to return a List< of objects instead. 
    static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog =
        CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) =>
            from dog in ctx.DogSet
            where dog.Age == age && dog.Name == name
            select dog);

    public List<Dog> GetSomeDogs( int age, string name )
    {
        return query_GetDog(YourContext, age, name).ToList(); // <== change here
    }

    public void DataBindStuff()
    {
        List<Dog> dogs = GetSomeDogs(4, "Bud");
        // but I want the dogs ordered by BirthDate
        gridView.DataSource = dogs.OrderBy( it => it.BirthDate );
    }

When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement: ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it.

Performance

What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read: inheritance), I have seen up to 10 seconds. So the first time a pre-compiled query gets called, you take a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query, practically the same as Linq2Sql. When you load a page with pre-compiled queries for the first time, you will take that hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled query will end up being called), while subsequent loads will take less than 300 ms. A dramatic difference, and it is up to you to decide whether it is ok for your first user to take the hit or whether you want a script to call your pages and force compilation of the queries.

Can this query be cached?

    Dog dog = (from dog in YourContext.DogSet
               where dog.ID == id
               select dog).FirstOrDefault();

No: ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it.

Parametrized Queries

Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lambda expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which ones you want to use:

    public struct MyParams
    {
        public string name;
        public bool checkName;
        public int age;
        public bool checkAge;
    }

    static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog =
        CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) =>
            from dog in ctx.DogSet
            // apply each criterion only when its flag is set
            where (!myParams.checkAge || dog.Age == myParams.age)
               && (!myParams.checkName || dog.Name == myParams.name)
            select dog);

    protected List<Dog> GetSomeDogs()
    {
        MyParams myParams = new MyParams();
        myParams.name = "Bud";
        myParams.checkName = true;
        myParams.age = 0;
        myParams.checkAge = false;
        return query_GetDog(YourContext, myParams).ToList();
    }

The advantage here is that you get all the benefits of a pre-compiled query.
The disadvantages are that you will most likely end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query, and that each query you run is not as efficient as it could be (particularly with joins thrown in).

Another way is to build an EntitySQL query piece by piece, like we all did with SQL:

    protected List<Dog> GetSomeDogs( string name, int age )
    {
        string query = "select value dog from Entities.DogSet where 1 = 1 ";
        if( !String.IsNullOrEmpty(name) )
            query = query + " and dog.Name = @Name ";
        if( age > 0 )
            query = query + " and dog.Age = @Age ";

        ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext );
        if( !String.IsNullOrEmpty(name) )
            oQuery.Parameters.Add( new ObjectParameter( "Name", name ) );
        if( age > 0 )
            oQuery.Parameters.Add( new ObjectParameter( "Age", age ) );

        return oQuery.ToList();
    }

Here the problems are:
- there is no syntax checking during compilation
- each different combination of parameters generates a different query, which will need to be pre-compiled when it is first run. In this case there are only 4 possible queries (no params, age only, name only and both params), but you can see that a real-world search can have far more.
- no one likes to concatenate strings!

Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot, but you also know there aren't that many, so your CityDog search page can load all the dogs for the city with a single pre-compiled query and then refine the results in memory:

    protected List<Dog> GetSomeDogs( string name, int age, string city )
    {
        string query = "select value dog from Entities.DogSet where dog.Owner.Address.City = @City ";
        ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext );
        oQuery.Parameters.Add( new ObjectParameter( "City", city ) );

        List<Dog> dogs = oQuery.ToList();
        if( !String.IsNullOrEmpty(name) )
            dogs = dogs.Where( it => it.Name == name ).ToList();
        if( age > 0 )
            dogs = dogs.Where( it => it.Age == age ).ToList();
        return dogs;
    }

It is particularly useful when you start by displaying all the data and then allow filtering.

Problems:
- It could lead to serious data transfer if you are not careful about your subset.
- You can only filter on the data that you returned. That means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name.

So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem:
- Use lambda-based query building when you don't care about pre-compiling your queries (see the sketch after this list).
- Use a fully defined pre-compiled Linq query when your object structure is not too complex.
- Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries is small (which means fewer pre-compilation hits).
- Use in-memory filtering when you are working with a smallish subset of the data, or when you had to fetch all of the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any extra time to be spent in the db).
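To illustrate the lambda-based option, here is a minimal sketch, assuming the same Dog entity and YourContext used throughout; because the predicate is composed at runtime, the resulting query cannot be pre-compiled:

    // Hypothetical example of lambda-based query building (not pre-compilable):
    // each call composes a fresh expression tree from the supplied criteria.
    protected List<Dog> SearchDogs( string name, int age )
    {
        IQueryable<Dog> query = YourContext.DogSet;
        if( !String.IsNullOrEmpty(name) )
            query = query.Where( d => d.Name == name );
        if( age > 0 )
            query = query.Where( d => d.Age == age );
        return query.ToList();
    }

Each call pays the tree-generation cost described in the Performance section above, which is the trade-off for the convenience.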
Singleton access

The best way to deal with your context and entities across all your pages is to use the singleton pattern:

    public sealed class YourContext
    {
        private const string instanceKey = "On3GoModelKey";

        YourContext() {}

        public static YourEntities Instance
        {
            get
            {
                HttpContext context = HttpContext.Current;
                if( context == null )
                    return Nested.instance;

                if( context.Items[instanceKey] == null )
                {
                    YourEntities entity = new YourEntities();
                    context.Items[instanceKey] = entity;
                }
                return (YourEntities)context.Items[instanceKey];
            }
        }

        class Nested
        {
            // Explicit static constructor to tell the C# compiler
            // not to mark the type as beforefieldinit
            static Nested() { }

            internal static readonly YourEntities instance = new YourEntities();
        }
    }

NoTracking: is it worth it?

When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? created? deleted?) and will also link objects together when further queries are made from the database, which is what is of interest here.

For example, let's assume that the Dog with ID == 2 has an owner whose ID == 10.

    Dog dog = (from dog in YourContext.DogSet
               where dog.ID == 2
               select dog).FirstOrDefault();
    //dog.OwnerReference.IsLoaded == false;

    Person owner = (from o in YourContext.PersonSet
                    where o.ID == 10
                    select o).FirstOrDefault();
    //dog.OwnerReference.IsLoaded == true;

If we were to do the same with no tracking, the result would be different:

    ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
        (from dog in YourContext.DogSet
         where dog.ID == 2
         select dog);
    oDogQuery.MergeOption = MergeOption.NoTracking;
    Dog dog = oDogQuery.FirstOrDefault();
    //dog.OwnerReference.IsLoaded == false;

    ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)
        (from o in YourContext.PersonSet
         where o.ID == 10
         select o);
    oPersonQuery.MergeOption = MergeOption.NoTracking;
    Person owner = oPersonQuery.FirstOrDefault();
    //dog.OwnerReference.IsLoaded == false;

Tracking is very useful and, in a perfect world without performance issues, it would always be on. But in this world there is a price for it in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data you query with NoTracking could be used to make updates/inserts/deletes in the database? If so, don't use NoTracking, because associations are not tracked and exceptions will be thrown.

In a page where there are absolutely no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix them, you risk having the framework try to Attach() a NoTracking object to the context where another copy of the same object exists with tracking on. Basically, what I am saying is that

    Dog dog1 = (from dog in YourContext.DogSet
                where dog.ID == 2
                select dog).FirstOrDefault();

    ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
        (from dog in YourContext.DogSet
         where dog.ID == 2
         select dog);
    oDogQuery.MergeOption = MergeOption.NoTracking;
    Dog dog2 = oDogQuery.FirstOrDefault();

dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I already have an object here with the same database key. Fail."
And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful.

How much faster is it with NoTracking?

It depends on the queries. Some are much more susceptible to tracking than others. I don't have a fast and easy rule for it, but it helps.

So I should use NoTracking everywhere then?

Not exactly. There are some advantages to tracking objects. The first one is that the object is cached, so subsequent calls for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntities object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that.) What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible, of course, but it does happen.

Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them:

    ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog);
    oDogQuery.MergeOption = MergeOption.NoTracking;
    List<Dog> dogs = oDogQuery.ToList();

    ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o);
    oPersonQuery.MergeOption = MergeOption.NoTracking;
    List<Person> owners = oPersonQuery.ToList();

In this case, no dog will have its .Owner property set. Something to keep in mind when you are trying to optimize performance.

No lazy loading, what am I to do?

This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call

    if( !ObjectReference.IsLoaded )
        ObjectReference.Load();

if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense.

Let's say you have your Dog object:

    public class Dog
    {
        public Dog Get(int id)
        {
            return YourContext.DogSet.FirstOrDefault( it => it.ID == id );
        }
    }

This is the type of function you work with all the time. It gets called from all over the place, and once you have that Dog object you will do very different things to it in different functions. First, it should be pre-compiled, because you will call it very often. Second, each different page will want access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc.

Of course, you could call Load() for each reference you need, any time you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object:

    static public Dog Get(int id) { return GetDog(entity, ""); }

    static public Dog Get(int id, string includePath)
    {
        string query = "select value o " +
                       " from YourEntities.DogSet as o " +
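As a companion to the examples above, here is a minimal sketch of what the IncludeMany(string) extension referenced throughout might look like; the method name and its comma-separated behavior come from the text, while the body itself is only an assumption:

    // Hypothetical sketch of an IncludeMany extension (not the original extension
    // section). Splits a comma-separated list of association paths and chains the
    // built-in ObjectQuery<T>.Include() call for each of them.
    public static class ObjectQueryExtensions
    {
        public static ObjectQuery<T> IncludeMany<T>(this ObjectQuery<T> query, string includes)
        {
            if (String.IsNullOrEmpty(includes))
                return query;

            foreach (string path in includes.Split(','))
            {
                string trimmed = path.Trim();
                if (trimmed.Length > 0)
                    query = query.Include(trimmed); // Include() returns a new ObjectQuery<T>
            }
            return query;
        }
    }

With such an extension in place, the Get(id, include) pattern shown earlier could be called as, for example, Get(5, "Owner,FavoriteFood").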

    Read the article

  • CUDA: cudaMemcpy only works in emulation mode.

    - by Jason
I am just starting to learn how to use CUDA. I am trying to run some simple example code:

    float *ah, *bh, *ad, *bd;
    ah = (float *)malloc(sizeof(float)*4);
    bh = (float *)malloc(sizeof(float)*4);
    cudaMalloc((void **) &ad, sizeof(float)*4);
    cudaMalloc((void **) &bd, sizeof(float)*4);

    ... initialize ah ...

    /* copy array on device */
    cudaMemcpy(ad,ah,sizeof(float)*N,cudaMemcpyHostToDevice);
    cudaMemcpy(bd,ad,sizeof(float)*N,cudaMemcpyDeviceToDevice);
    cudaMemcpy(bh,bd,sizeof(float)*N,cudaMemcpyDeviceToHost);

When I run in emulation mode (nvcc -deviceemu) it runs fine (and actually copies the array). But when I run it in regular mode, it runs w/o error, but never copies the data. It's as if the cudaMemcpy lines are just ignored. What am I doing wrong? Thank you very much, Jason

    Read the article

  • Android GridView with ads below

    - by ktambascio
Hi, I'm trying to integrate ads (admob) into my Android app. It's mostly working, except for one issue with the layout. My layout consists of:

    <?xml version="1.0" encoding="utf-8"?>
    <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:orientation="vertical"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        xmlns:app="http://schemas.android.com/apk/res/com.example.photos">

        <LinearLayout
            android:orientation="horizontal"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:id="@+id/status_layout">
            <ImageView
                android:id="@+id/cardStatus"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content" />
            <TextView
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:text="@string/hello"
                android:id="@+id/cardStatusText" />
        </LinearLayout>

        <GridView
            android:id="@+id/imageGridView"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:padding="10dp"
            android:verticalSpacing="10dp"
            android:horizontalSpacing="10dp"
            android:numColumns="auto_fit"
            android:columnWidth="100dp"
            android:stretchMode="columnWidth"
            android:gravity="center"
            android:layout_below="@id/status_layout" />

        <!-- Place an AdMob ad at the bottom of the screen. -->
        <!-- It has white text on a black background. -->
        <!-- The description of the surrounding context is 'Android game'. -->
        <com.admob.android.ads.AdView
            android:id="@+id/ad"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:layout_alignParentBottom="true"
            app:backgroundColor="#000000"
            app:primaryTextColor="#FFFFFF"
            app:secondaryTextColor="#CCCCCC"
            app:keywords="Android Photo" />
    </RelativeLayout>

The ads are shown at the bottom of the screen, just as I want. However, they seem to be overlaid or drawn on top of the bottom portion of the grid view. I would like the gridview to be shorter, so that the ad can fill the bottom portion of the screen and not hide part of the gridview. The problem is most annoying when you scroll all the way to the bottom of the gridview and still cannot fully see the last row of items in the grid due to the ad.

I'm not sure if this is the standard way that AdMob ads work. If this is the case, adding some padding to the bottom of the grid (if that's possible) would do the trick. That way the user can scroll a bit further and see the last row in addition to the ad. I just switched from using LinearLayout to RelativeLayout after reading some similar issues with ListViews. Now my ad is along the bottom instead of above the grid, so I'm getting closer. Thoughts? -Kevin

    Read the article

  • Post Loading ads - using appendChild to move an IFRAME with text link ads

    - by Prem
I have changed the code around to basically load an ad at the bottom of the page in a hidden div, and attached an onload event handler that calls document.getElementById(xxx).appendChild() to take the hidden ad and move it into the right spot in my page. This works GREAT... however, when the ad is a text ad, AFTER I move the ad there is nothing in the rendered iframe. I did tests to see what it looks like before I move it, and sure enough the text links load in the iframe, but the second I do the appendChild call to move the div that contains the ad I seem to lose the contents of the iframe. Any ideas what's going on?

    <div id="myad" style="display: none;">
        GA_googleFillSlot("MyADSlotName");
    </div>

    <script>
    window.onload = function() {
        // leader board
        document.getElementById('adplaceholder').appendChild(document.getElementById('myAd'));
        document.getElementById('myAd').style.display = '';
    }
    </script>

UPDATE: I think the problem here is that for text ads Google writes to the iframe directly, inserting the relevant text links, whereas for other ads it uses the iframe to just point to some src. It seems like when I do the appendChild, if there is no "src" set for the iframe, the iframe in the new location contains nothing... I guess it does a reload on the src? Any way around this??

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >