Search Results

Search found 9332 results on 374 pages for 'an original alias'.

Page 142 of 374

  • What are the possible disadvantages of enabling the "data access" server option in sys.servers for the local server?

    - by Corp. Hicks
    We plan to change the default server options of a SQL Server 2005 (SQL2k5) instance by enabling data access. The reason is that we want to run "SELECT * FROM OPENQUERY(LOCALSERVER, '...')"-like statements on the server. What are the possible disadvantages of enabling the server option "data access" (alias sys.servers.is_data_access_enabled) for the local server (sys.servers.server_id = 0)? (There must be a reason for MS setting this option to disabled by default...)

    EDIT: It turns out that I'm not the first person to ask this question: http://sqlblogcasts.com/blogs/piotr_rodak/archive/2009/11/22/data-access-setting-on-local-server.aspx

    "The DATA ACCESS server option is not very well documented in my opinion - the Books Online say it is a property of linked servers. It doesn't mention at all that you actually can have it enabled on your local server to enable OPENQUERY calls. I noticed that when you disable DATA ACCESS on a linked server, you can't query any table located on it (I tested it on my loopback server), neither using OPENQUERY nor the four-part naming convention. You can still call procedures (with four-part naming) that return rowsets. Well, the interesting question is why it is disabled by default on the local server - I suppose to discourage users from using OPENQUERY against it."

    It also seems that the author of the post (Pjotr Rodak) is a Stack Overflow user :-)

    Read the article

  • Visual Studio: Lost CD

    - by JamesBrownIsDead
    I have my original CD housing for my copy of Visual Studio 2008 Standard, so I still have my key. I have trial versions of 2008 and 2010, and Express versions of both, but can't find a place to enter my CD key. What am I supposed to do? I lost my CD. Should I just call MSFT and ask them what to do?

    Read the article

  • Random loading swf files into main swf with countdown timer

    - by Plugger
    OK guys, a question: can this be done, has it already been done, or can someone point me in the right direction? (Be aware I am a total newbie with ActionScript.) I have a main SWF movie, about 600px x 400px. What I want is for it to run for, say, 30 seconds, then show a countdown timer in the bottom corner for, say, 10 seconds. After that I want it to randomly load another SWF file over the top of the original, then the timer repeats for 10 seconds, another SWF file is randomly loaded over the top, and so on. What's the best way to go about this?

    Read the article

  • Can you cross-site ping another site using C# or JS/Ajax?

    - by Josh Harris
    On our web application I am trying to ping a 3rd party site to see if it is up before redirecting our customers to it. So far I have not seen a way to do this other than from a desktop app or system console. Is this possible? I have heard that there was an image trick in original ASP. Currently we are using .NET MVC with Javascript. Thank you, Josh

    Read the article

  • Sendmail: Mail delivery for the same domain to an internal or external mail server

    - by Silkograph
    It's a bit difficult to explain, but it's a very simple problem. We have an internal sendmail server and a hosted server, both set to the same domain name, and we have mixed mail accounts. For example, we have two users in one office: [email protected] is local only, while [email protected] is internal plus external. Internal means we create the user on the local Linux box where sendmail is set up; external means we create the user on both the local and the hosted server. [email protected] can send mail to any internal user created on the Linux box where sendmail is installed, but he cannot send mail to outside domains, and no mail can be sent to him from outside because there is no account for him on the external hosted server. [email protected] can send mail to internal users as well as to all other domains through sendmail's smart_host feature, which uses the hosted server's SMTP. [email protected] also gets all his external email internally through Fetchmail on the Linux box. Now we have a third user, [email protected], who will always be outstation and can use the external server only. I cannot create an account for [email protected] on the local Linux box, because then his mail would only ever be delivered locally, and I don't want to create an alias that forwards his mail to a Gmail or Yahoo account. I want internal users to be able to send email to his external account. How can this be done? Thanks in advance.

    Read the article

  • Added IP-based virtual host to sites-available and created symlink to sites-enabled, but new domain times out

    - by lililili
    I added an IP-based virtual host to sites-available and created a symlink to it in sites-enabled, but the new domain times out. When I navigate to mynewdomain.com it says the connection timed out. The virtual host file looks like this:

        NameVirtualHost 12.12.12.12
        <VirtualHost 12.12.12.12>
            ServerAdmin webmaster@localhost
            ServerName newdomain.com
            DocumentRoot /var/www/newdomain.com
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Read the article

  • Trying to set the message-id, in-reply-to, etc... in ActionMailer

    - by Ryan
    I'm working on an app that needs to be able to send out email updates and then route the reply back to the original item. All emails will come to a single address (this can't change, unfortunately), and I need to be able to determine where they go. My initial thought was setting the message-id for the item so that it comes back in a References header. Any ideas on how to accomplish this with ActionMailer?

    Read the article

  • Proper way to use Linq with WPF

    - by Ingó Vals
    I'm looking for a good guide to the right way of using LINQ to SQL together with WPF. Most guides only cover the bare basics, like how to show data from a database, but none that I have found covers how to save back to the database. Can you answer, or point me to a guide that answers, these questions? I have a separate data project because the same data will also be used in a web page, so I use the repository approach. That means I have a separate class that uses the DataContext, with methods like GetAllCompanies() and GetCompanyById(int id).

    1) Where there are collections, is it best to return an IQueryable or should I return a List? Inside the WPF project I have seen recommendations to wrap the collection in an ObservableCollection.

    2) Why should I use ObservableCollection, and should I use it even with LINQ / IQueryable? Some properties of the LINQ entities should be editable in the app, so I set them to two-way binding mode. That would change the object in the ObservableCollection.

    3) Is the object in the ObservableCollection still an instance of the original LINQ entity, and is the change therefore reflected in the database (when SubmitChanges is called)? I should have some kind of save method in the repository, but when should I call it? What happens if someone edits a field but decides not to save it, goes to another object, edits it, and then presses save - doesn't the original change also get saved? When does it stop remembering the changes to a LINQ entity object? Should I instantiate the DataContext class in each method so it goes out of scope when done?

    4) When and how should I call the SubmitChanges method?

    5) Should I have the DataContext as a member variable of the repository class or as a method-local variable?

    To add a new row I should create a new object in an event handler (a "New" button push) and then add it to the database using a repo method.

    6) When I add the object to the database there will be no new object in the ObservableCollection. Do I refresh somehow?

    7) I want to reuse the edit window when creating a new item, but I'm not sure how to dynamically switch from referencing the selected item in a listview to this new object.

    Any examples you can point out would be appreciated.

    Read the article

  • Basic Java Multi-Threading Question

    - by Veered
    When an object is instantiated in Java, is it bound to the thread that instantiated it? I ask because when I anonymously implement an interface in one thread and pass it to another thread to be run, all of its methods are run in the original thread. If objects are bound to their creation thread, is there any way to create an object that will run in whatever thread calls it?

    Read the article

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors and the string constants that shall be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that using the command line switches --param ggc-min-expand=0 --param ggc-min-heapsize=4096 may solve the problem. They, however, only seem to work when optimization is turned on. 1) Is this really the solution that I am looking for? 2) Or is there a faster, better way to do this (compiling takes ages with these options activated)? Best wishes, Alexander

    Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file that I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting -- what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results.) The solution that I switched to now is reading in the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do this in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by default named after the file name, e.g. converting the file inbuilt_training_data.bin would result in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web which claim that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but this does not seem to be right -- only extern char binary_inbuilt_training_data_bin_start; worked for me.
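    As an illustration (not the poster's actual code), here is a minimal sketch of how such an embedded blob can be consumed from the C++ side, assuming the object file was produced roughly as described above and linked into the program; the symbol names follow the poster's pe-i386 observation (declared without the leading underscore):

        // Hypothetical example; assumes the data was embedded with something like
        //   objcopy -I binary -O pe-i386 -B i386 inbuilt_training_data.bin inbuilt_training_data.o
        // and that the resulting object file is linked into the program.
        #include <cstddef>
        #include <string>

        // On pe-i386 the compiler prepends the leading underscore itself, so the
        // symbols are declared here without it, matching what worked for the poster.
        extern char binary_inbuilt_training_data_bin_start;
        extern char binary_inbuilt_training_data_bin_end;

        std::string load_embedded_training_data()
        {
            const char* begin = &binary_inbuilt_training_data_bin_start;
            const char* end   = &binary_inbuilt_training_data_bin_end;
            // Copy the raw bytes of the embedded file into a string.
            return std::string(begin, static_cast<std::size_t>(end - begin));
        }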

    Read the article

  • nginx projects in subfolders

    - by Timothy
    I'm getting frustrated with my nginx configuration, so I'm asking for help in writing my config file to serve multiple projects from sub-directories under the same root. This isn't virtual hosting, as they all use the same host value. Perhaps an example will clarify my attempt:

        request 192.168.1.1/      should serve index.php from /var/www/public/
        request 192.168.1.1/wiki/ should serve index.php from /var/www/wiki/public/
        request 192.168.1.1/blog/ should serve index.php from /var/www/blog/public/

    These projects use PHP and FastCGI. My current configuration is very minimal:

        server {
            listen 80 default;
            server_name localhost;
            access_log /var/log/nginx/localhost.access.log;
            root /var/www;
            index index.php index.html;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I've tried various things with alias and rewrite but was not able to get things set up correctly for FastCGI. It seems there should be a more elegant way than writing location blocks and duplicating root, index, SCRIPT_FILENAME, etc. Any pointers to get me headed in the right direction are appreciated.

    Read the article

  • Wordpress - related posts by custom taxonomy problem

    - by Nordin
    Hello, I'm trying to display related posts based on a custom taxonomy. I found a query at wordpress.org that kind of works; however, the original post gets duplicated in the results multiple times. ("words" is the name of the custom taxonomy I use.) What seems to happen is that the single post gets duplicated according to whatever amount showposts is set to. Any ideas what could cause this? The code:

        <?php
        // for use in the loop: display all "content", regardless of post_type,
        // that has the same custom taxonomy (e.g. words) terms as the current post
        $backup = $post; // back up the current object
        $found_none = '<h2>No related posts found!</h2>';
        $taxonomy = 'words';    // e.g. post_tag, category, custom taxonomy
        $param_type = 'words';  // e.g. tag__in, category__in, but genre__in will NOT work
        $post_types = get_post_types( array('public' => true), 'names' );
        $tax_args = array('orderby' => 'none');
        $tags = wp_get_post_terms( $post->ID, $taxonomy, $tax_args );
        if ($tags) {
            foreach ($tags as $tag) {
                $args = array(
                    "$param_type" => $tag->slug,
                    'post__not_in' => array($post->ID),
                    'post_type' => $post_types,
                    'showposts' => 5,
                    'caller_get_posts' => 1
                );
                $my_query = null;
                $my_query = new WP_Query($args);
                if ( $my_query->have_posts() ) {
                    while ($my_query->have_posts()) : $my_query->the_post(); ?>
                        <h3><a href="<?php the_permalink() ?>" rel="bookmark" title="<?php the_title(); ?>"><?php the_title(); ?></a></h3>
                    <?php
                    $found_none = '';
                    endwhile;
                }
            }
        }
        if ($found_none) {
            echo $found_none;
        }
        $post = $backup;   // copy it back
        wp_reset_query();  // to use the original query again
        ?>

    Read the article

  • Debugging T4 Template in VS 2010 Crashes IDE

    - by Eric J.
    I'm trying to debug a slightly-modified version of the ADO.NET POCO Entity Generator template using the directions Oleg Sych published a few years back. I modified the DbgJITDebugLaunchSetting key as recommended. I get a dialog indicating that a user-defined breakpoint has been hit. However, rather than being presented with the option to start a new instance of VS 2010 to debug, the original instance of VS 2010 just crashes and auto-restarts. Is it possible to debug T4 templates with VS 2010?

    Read the article

  • IE Automation - How to Trap 'NewProcess' event

    - by dpb
    In this environment - standard user, Win7 x64, IE8 - on opening an unprotected URL, IE8 will first start a tab with low integrity and then swap out this tab with another tab of medium integrity. This is done behind the scenes, and the original IWebBrowser2 pointer is lost. I want to catch the 'NewProcess' event which gets generated during this swap-out; please help me with how to go about this. Sample code would help; I'm using C++. Ref - http://blogs.msdn.com/b/ieinternals/archive/2011/08/03/internet-explorer-automation-protected-mode-lcie-default-integrity-level-medium.aspx Thank you

    Read the article

  • Writing .htaccess mod rewrite for hierarchical categories

    - by NetCaster
    I need to rewrite the URLs for my classified ads directory. I have 4 types of links:

        /City                  == display all ads in the city
        /City/Cat1             == display all ads in the city + category
        /City/Cat1/Cat2        == display all ads in the city + category 1 + category 2
        /City/Cat1/Cat2/Ad-id  == display the ad itself and pass the cat1, cat2, and city variables

    The original hidden URL should be index.php?city=alexandria&cat1=cars&cat2=bikes&adid=EWSw22d. Can you please help me write the .htaccess for this structure?

    Read the article

  • Edge detection using wavelet

    - by cheoma
    I have done edge detection using the wavelet transform with these steps:

    1. converting the image to grayscale
    2. decomposing the image using dwt2 (discrete wavelet transform, Haar wavelet filter) into horizontal, vertical, diagonal, and approximation (detail) parts
    3. further decomposing the horizontal part
    4. thresholding (a global threshold, as in Canny edge detection)

    I got the edges, but I have a problem locating the edges in the complete image, i.e. recovering the original image using only the edges. So I need help concerning this, either in concept, MATLAB code, or references. I hope I will get your help soon.

    Read the article

  • Testing Rails Metal With Cucumber/rSpec

    - by nkabbara
    Hi, I'm trying to stub a third-party service that my Metal talks to. It seems RSpec mocks/stubs don't extend all the way to the Metal: when I call stubbed methods on objects, the original method is called, not the stub. Any idea how I can have RSpec doubles extend all the way to the Metal? Thanks. -Nash

    Read the article

  • Performance Optimization for Matrix Rotation

    - by Summer_More_More_Tea
    Hello everyone, I'm now stuck on a performance optimization lab in the book "Computer Systems: A Programmer's Perspective", described as follows. In an N*N matrix M, where N is a multiple of 32, the rotate operation can be represented as:

        Transpose: interchange elements M(i,j) and M(j,i)
        Exchange rows: row i is exchanged with row N-1-i

    An example of matrix rotation (N is 3 instead of 32 for simplicity):

        before rotate:    after rotate:
        |1|2|3|           |3|6|9|
        |4|5|6|           |2|5|8|
        |7|8|9|           |1|4|7|

    A naive implementation is:

        #define RIDX(i,j,n) ((i)*(n)+(j))

        void naive_rotate(int dim, pixel *src, pixel *dst)
        {
            int i, j;
            for (i = 0; i < dim; i++)
                for (j = 0; j < dim; j++)
                    dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)];
        }

    I came up with an idea based on inner-loop unrolling. The results:

        Code Version      Speed Up
        original          1x
        unrolled by 2     1.33x
        unrolled by 4     1.33x
        unrolled by 8     1.55x
        unrolled by 16    1.67x
        unrolled by 32    1.61x

    I also got a code snippet from pastebin.com that seems to solve this problem:

        void rotate(int dim, pixel *src, pixel *dst)
        {
            int stride = 32;
            int count = dim >> 5;
            src += dim - 1;
            int a1 = count;
            do {
                int a2 = dim;
                do {
                    int a3 = stride;
                    do {
                        *dst++ = *src;
                        src += dim;
                    } while(--a3);
                    src -= dim * stride + 1;
                    dst += dim - stride;
                } while(--a2);
                src += dim * (stride + 1);
                dst -= dim * dim - stride;
            } while(--a1);
        }

    After carefully reading the code, I think the main idea of this solution is to treat each group of 32 rows as a data zone and perform the rotating operation on each zone in turn. The speed-up of this version is 1.85x, overwhelming all the loop-unrolled versions. Here are the questions: In the inner-loop-unroll version, why does the improvement taper off as the unrolling factor increases, especially when changing the unrolling factor from 8 to 16, which does not have the same effect as switching from 4 to 8? Does the result have some relationship with the depth of the CPU pipeline? If the answer is yes, could the diminishing improvement reflect the pipeline length? What is the probable reason for the speed-up of the data-zone version? It seems that there is not much essential difference from the original naive version. EDIT: My test environment is an Intel Centrino Duo processor and the version of gcc is 4.4. Any advice will be highly appreciated! Kind regards!
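    For reference, here is a sketch of what one of the "unrolled by 4" variants from the table above might look like (an assumed reconstruction, not the poster's code; the pixel typedef is a placeholder for whatever the lab actually defines):

        /* Hypothetical "unrolled by 4" inner loop; pixel and RIDX mirror the naive version. */
        typedef struct { unsigned short red, green, blue; } pixel;  /* assumption: the lab's pixel struct */
        #define RIDX(i, j, n) ((i) * (n) + (j))

        void rotate_unroll4(int dim, pixel *src, pixel *dst)
        {
            int i, j;
            for (i = 0; i < dim; i++)
                for (j = 0; j < dim; j += 4) {  /* dim is a multiple of 32, so j+3 stays in range */
                    dst[RIDX(dim-1-j,   i, dim)] = src[RIDX(i, j,   dim)];
                    dst[RIDX(dim-1-j-1, i, dim)] = src[RIDX(i, j+1, dim)];
                    dst[RIDX(dim-1-j-2, i, dim)] = src[RIDX(i, j+2, dim)];
                    dst[RIDX(dim-1-j-3, i, dim)] = src[RIDX(i, j+3, dim)];
                }
        }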

    Read the article

  • Exception opening TAdoDataset: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another

    - by Dave Falkner
    I've been trying to debug the following problem for several weeks now - this method is called from several places within the same datamodule, but this exception (from the subject line of this post) only occurs when integers for a certain purpose (pickup orders vs. orders that we ship through a carrier) are used - and don't ask me how the application can tell the difference between one integer's purpose and another! Furthermore, I cannot duplicate this issue on my machine - the error occurs on a warehouse machine but not my own development machine, even when working with the same production database. I have suspected an MDAC version conflict between the two machines, but have run a version checker and confirmed that both machines are running 2.8, and additionally have confirmed this by logging the TAdoDataset's .Version property at runtime.

        function TdmESShip.SecondaryID(const PrimaryID : Integer): String;
        begin
          try
            with qESPackage2 do
            begin
              if Active then Close;
              LogMessage('-----------------------------------');
              LogMessage('Version: ' + FConnection.Version);
              LogMessage('DB Info: ' + FConnection.Properties['Initial Catalog'].Value + ' ' +
                         FConnection.Properties['Data Source'].Value);
              LogMessage('Setting the parameter.');
              Parameters.ParamByName('ParameterName').Value := PrimaryID;
              LogMessage('Done setting the parameter.');
              Open;

    Ninety-nine times out of 100 this logging code logs a successful operation as follows:

        Version: 2.8
        DB Info: (database name and instance)
        Setting the parameter.
        Done setting the parameter.
        Opened the dataset.

    But then whenever a "pickup" order is processed, this exception gets thrown whenever the dataset is opened:

        Version: 2.8
        DB Info: (database name and instance)
        Setting the parameter.
        Done setting the parameter.
        GetESPackageID() threw an exception. Type: EOleException, Message: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another
        Error: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another for packageID 10813711

    I've tried eliminating the parameter and have built the commandtext for this dataset programmatically, suspecting that some part of the TParameter's configuration might be out of whack, but the same error occurs under the same circumstances. I've tried every combination of TParameter properties that I can think of - this is the millionth TParameter I've created for my millionth dataset, and I've never encountered this error. I've even created a second dataset from scratch and removed all references to the original dataset in case some property of the original dataset in the .dfm might be corrupted, but the same error occurs under the same circumstances. The commandtext for this dataset is a simple

        select ValueA from TableName where ValueB = @ParameterB

    I'm about ready to do something extreme, such as writing a web service to look these values up - it feels right now as though I could destroy my machine, rebuild it, rewrite this entire application from scratch, and the application would still know to throw an exception whenever I try to look up a secondary value from a primary value, but only for pickup orders, and only from the one machine in the warehouse, but I'm probably missing something simple. So, any help anyone could provide would be greatly appreciated.

    Read the article

  • Who owns a php exec tar extracted file?

    - by Ken
    As far as file permissions are concerned, when you use a PHP script to extract a tar file, who is the "owner" user of the files created? I'm wondering if it's my FTP user, because I uploaded the script file? Or does Apache own the files? I know there are flags that can be set to preserve the original permissions, which I don't want (the files were created and archived by someone else). I want my user to be the creator/owner of the files.

    Read the article
