Search Results

Search found 1296 results on 52 pages for 'sam abraham'.

Page 10 of 52

  • What's the best practice for linking to/from guest posts on other blogs?

    - by sam
    When I write a guest post for another site, I include a link back to my own site - an inbound link. If I then wrote a post on my blog publicising the guest post, the links would be reciprocal and would cancel each other out, correct? If I made the link on my blog back to the guest post nofollow, would that neutralise the reciprocity, meaning I still get link juice from the guest post? Further down the line, if the site I guest-posted on wanted to write a post for my site, how should I avoid re-introducing the reciprocal-link problem?
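
    For reference, this is the markup in question - the URL is a placeholder, and rel="nofollow" is the standard attribute for telling search engines not to pass link equity through a link:

        <!-- on my own blog, announcing the guest post -->
        <a href="http://example.com/my-guest-post" rel="nofollow">Read my guest post</a>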

    Read the article

  • Work Item Traceability in TFS 2010

    - by Sam Patrick
    I have created a Windows Forms project (VS solution) under a TFS 2010 project, and I may eventually add more solutions to the TFS project. My question: can we create a Use Case WIT for a specific solution within a TFS project? Furthermore, is it possible to create a "traceability matrix" that starts at the Use Case level and goes down to the code level (at least the namespace level) of that particular VS solution?

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather for general design principles, or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right.

    Brief explanation: my basic problem is that I have to write an application that handles a large library of meta-data, can easily modify that meta-data on the fly, is robust with respect to crashing, and is very efficient. (Sort of like the design parameters of iTunes, although iTunes sometimes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest.

    Long explanation: specifically, I am writing a program that creates a library of image files plus meta-data about those files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, add new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image Viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have a boolean flag for each of those tags (depending on whether the user tagged it while it was open in the Viewer). However, prior to being viewed in the Viewer, image A has no value for tags T1, T2, and T3; instead it has a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which automatically sets all images to "dirty" with respect to the new tag).

    The program must be fast: it must easily pull up a list of images with or without a certain tag, as well as the images that are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, the tagging done in that session is not lost (though it is perhaps okay to lose some of it). Finally, it has to work with a lot of images (10,000 or more).

    I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs, and I have never worked with databases. With respect to meta-data storage, there seem to be two design choices:

    Individual meta-data: keep a separate meta-data file for each image. This way, as soon as the meta-data for an image changes, it can be written to disk without rewriting the information for all the other images.

    Centralized meta-data: keep a single file holding the meta-data for every image. This would probably require writing meta-data at intervals rather than after every change. The benefit is that you could keep a centralized list of all images carrying a given tag, etc., making the task of pulling up all images with a given tag very efficient.
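
    For what it's worth, a minimal sketch of how the centralized approach might look as a SQLite schema - the table and column names are made up, and the key trick is that a missing row in image_tags means "dirty", so introducing a new tag automatically leaves every image dirty for it:

        -- images and tags are plain lookup tables
        CREATE TABLE images (
            id   INTEGER PRIMARY KEY,
            path TEXT NOT NULL UNIQUE,
            hash TEXT                -- content hash, for duplicate detection
        );
        CREATE TABLE tags (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL UNIQUE
        );
        -- one row per (image, tag) pair that has actually been decided;
        -- state 1 = tagged, 0 = explicitly untagged; no row = dirty/unknown
        CREATE TABLE image_tags (
            image_id INTEGER NOT NULL REFERENCES images(id),
            tag_id   INTEGER NOT NULL REFERENCES tags(id),
            state    INTEGER NOT NULL CHECK (state IN (0, 1)),
            PRIMARY KEY (image_id, tag_id)
        );
        -- makes "all images with tag T" a cheap index lookup
        CREATE INDEX idx_image_tags_by_tag ON image_tags (tag_id, state);

    Images that are dirty with respect to tag T are then simply the images with no image_tags row for T, and wrapping each tagging action in a transaction gives the crash safety essentially for free.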

    Read the article

  • What is a better abstraction layer for D3D9 and OpenGL vertex data management?

    - by Sam Hocevar
    My rendering code has always been OpenGL. I now need to support a platform that does not have OpenGL, so I have to add an abstraction layer that wraps OpenGL and Direct3D 9. I will support Direct3D 11 later.

    TL;DR: the differences between OpenGL and Direct3D cause redundancy for the programmer, and the data layout feels flaky.

    For now, my API works a bit like this. This is how a shader is created:

        Shader *shader = Shader::Create(
            " ... GLSL vertex shader ... ",
            " ... GLSL pixel shader ... ",
            " ... HLSL vertex shader ... ",
            " ... HLSL pixel shader ... ");
        ShaderAttrib a1 = shader->GetAttribLocation("Point", VertexUsage::Position, 0);
        ShaderAttrib a2 = shader->GetAttribLocation("TexCoord", VertexUsage::TexCoord, 0);
        ShaderAttrib a3 = shader->GetAttribLocation("Data", VertexUsage::TexCoord, 1);
        ShaderUniform u1 = shader->GetUniformLocation("WorldMatrix");
        ShaderUniform u2 = shader->GetUniformLocation("Zoom");

    There is already a problem here: once a Direct3D shader is compiled, there is no way to query an input attribute by its name; apparently only the semantics stay meaningful. This is why GetAttribLocation has these extra arguments, which get hidden in ShaderAttrib.

    Now this is how I create a vertex declaration and two vertex buffers:

        VertexDeclaration *decl = VertexDeclaration::Create(
            VertexStream<vec3,vec2>(VertexUsage::Position, 0, VertexUsage::TexCoord, 0),
            VertexStream<vec4>(VertexUsage::TexCoord, 1));
        VertexBuffer *vb1 = new VertexBuffer(NUM * (sizeof(vec3) + sizeof(vec2)));
        VertexBuffer *vb2 = new VertexBuffer(NUM * sizeof(vec4));

    Another problem: the information VertexUsage::Position, 0 is totally useless to the OpenGL/GLSL backend because it does not care about semantics.

    Once the vertex buffers have been filled with (or pointed at) data, this is the rendering code:

        shader->Bind();
        shader->SetUniform(u1, GetWorldMatrix());
        shader->SetUniform(u2, blah);
        decl->Bind();
        decl->SetStream(vb1, a1, a2);
        decl->SetStream(vb2, a3);
        decl->DrawPrimitives(VertexPrimitive::Triangle, NUM / 3);
        decl->Unbind();
        shader->Unbind();

    You see that decl is a bit more than just a D3D-like vertex declaration; it kinda takes care of rendering as well. Does this make sense at all? What would be a cleaner design, or a good source of inspiration?

    Read the article

  • Juju didn't configure RabbitMQ for OpenStack?

    - by SaM
    I have installed Ubuntu OpenStack HA with Juju across all 24 servers, but my OpenStack is not working at all. On every page of the dashboard I get errors saying "Could not retrieve usage information", "Could not retrieve volume information", "Could not retrieve ...", etc. I spent hours on it and discovered that Juju has not done the configuration correctly: on the cloud controller, Juju added a RabbitMQ vhost entry to nova.conf, but that virtual host was never actually added to RabbitMQ. How is it supposed to work, then? Meanwhile, on the juju-gui canvas RabbitMQ is all green and reported as working fine, which in reality it is not. I am now seriously wondering whether Juju has done the correct configuration on all 24 servers; I am getting the feeling it would have been faster to deploy OpenStack manually instead of using Juju. Why was the virtual host entry not added to RabbitMQ, and how should I solve this?
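
    If that diagnosis is right, creating the vhost by hand may unblock things while the charm issue is investigated. A sketch using rabbitmqctl - the vhost and user names below are made-up examples, so substitute the actual values from the rabbit_virtual_host and rabbit_userid lines in your nova.conf:

        # see which vhosts actually exist
        sudo rabbitmqctl list_vhosts
        # create the vhost nova.conf expects, then grant the nova user full rights on it
        sudo rabbitmqctl add_vhost /nova
        sudo rabbitmqctl set_permissions -p /nova nova ".*" ".*" ".*"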

    Read the article

  • Backlink anchor text / keyword strategy post-Penguin

    - by sam
    I've heard a lot recently about over-optimisation of off-site backlink anchor text. What I've heard from SEOmoz is that the sites most affected by the Penguin update had over 60% of their backlink anchor text identical, so Google saw this as unnatural and penalised them - which kind of makes sense, as it's not normal to have such a high density on one word or phrase. If I were building links for a gastro pub in London (purely hypothetical), the sort of keywords I would go after are "gastro pub in london" and "london gastro pub". If I were to mix up the anchor text by having: "gastro pub", "gastro pub in london", and "london gastro pub", would this be seen as OK? Or would these all be treated as broad-match variants and counted as one phrase, making me fall foul of the Penguin update?

    Read the article

  • Network really slow with TL-WN951N wireless card

    - by Sam
    I literally just installed Ubuntu and it seems to be working great, except that the network is deadly slow. I'm running a TL-WN951N wireless card which can download at about 600-700 KB/s in Windows, but in Ubuntu the maximum speed it seems to reach is around 5 KB/s. I should note that my WAP is only wireless-G, but as I said, I get much better speeds in Windows. I'm testing the speeds by downloading files from http://mirror.internode.on.net/pub/test/. Does anyone have any idea? I saw some people recommend downloading drivers and compiling them myself, but I'm really new to all of this, so I'd appreciate someone walking me through it so I don't brick my computer! Here are the results of lspci -v:

        04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)
            Subsystem: Giga-byte Technology GA-EP45-DS5 Motherboard
            Flags: bus master, fast devsel, latency 0, IRQ 44
            I/O ports at c000 [size=256]
            Memory at e9110000 (64-bit, prefetchable) [size=4K]
            Memory at e9100000 (64-bit, prefetchable) [size=64K]
            [virtual] Expansion ROM at e9120000 [disabled] [size=64K]
            Capabilities: <access denied>
            Kernel driver in use: r8169
            Kernel modules: r8169

        05:02.0 Network controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01)
            Subsystem: Atheros Communications Inc. Device 3071
            Flags: bus master, 66MHz, medium devsel, latency 168, IRQ 18
            Memory at e9200000 (32-bit, non-prefetchable) [size=64K]
            Capabilities: <access denied>
            Kernel driver in use: ath9k
            Kernel modules: ath9k
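
    One low-risk thing to try before compiling anything: the ath9k driver shown above has a module parameter to bypass hardware crypto, which has historically caused poor throughput on some Atheros chips. A sketch, assuming that is the cause here (easily reverted by deleting the file):

        # confirm the parameter exists for your kernel's ath9k
        modinfo ath9k | grep nohwcrypt
        # make the setting persistent, then reload the driver
        echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
        sudo modprobe -r ath9k && sudo modprobe ath9k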

    Read the article

  • Setting up page goals in Analytics when using progressive enhancement to load content with jQuery .load

    - by sam
    I'm using jQuery .load to pull content from other pages into my homepage. So that Google can still see what's going on, I've made the <a> tags point to the real pages but override them in JavaScript, so instead of navigating to the page it just loads that page's content into the main page. Normally I would just make the page /contact.html a goal. Can I still get it to work as a goal if the content is being loaded in? Can I do something like: when the user clicks <a href="contact.html" id="load-contact">contact</a>, log the click on the <a> tag as a goal, rather than an actual visit to the page?
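
    One well-known pattern for this is the "virtual pageview": fire a pageview hit from the click handler, then define a URL-destination goal for that path. A sketch assuming the classic asynchronous tracker (_gaq) from that era; the selector and path come from the question itself:

        // fire a virtual pageview when the contact link loads its content
        $('#load-contact').click(function () {
            _gaq.push(['_trackPageview', '/contact.html']);
        });

    With that in place, a goal whose destination is /contact.html will be matched even though the page is never actually navigated to.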

    Read the article

  • The Basics of Project Management / Software Development

    - by Sam
    It suddenly struck me today that I have never developed any large application or worked with a team of programmers, and so am missing out on a lot - both in terms of technical knowledge and the social, fun part of it. I would like to rectify that; one idea is to start an open source group, training college students (for no charge) and developing some open source application with them. Please give me some basic advice on the whole process of how to (1) plan and (2) manage projects in a team. What new skill sets would you recommend? (I have read Joel on Software and 37signals, and got many insightful tips from them, but I'd like a little more technical knowledge.) Background (freelancer, past 4+ years): computer engineer; graphic/web designer; online marketing; moved on to programming in PHP, Perl, and Python; did Oracle DBA OCP training to understand databases. Current self-assigned title: web application developer.

    Read the article

  • Meaningful concise method naming guidelines

    - by Sam
    Recently I started releasing an open source project. While I was the only user of the library, I did not care about the names, but now I want to assign meaningful names to each method to make the library easier to learn; at the same time I need concise names that are easy to write. I am aware of lots of naming guidelines that only cover letter casing or similar simple points; here, I am looking for guidelines on meaningful, concise naming. For example, this could be part of the guidelines I am after: use Add when an existing item is going to be added to a target; use Create when a new item is being created and then added to a target. Use Remove when an existing item is going to be removed from a target; use Delete when an item is going to be removed permanently. Pair AddXXX methods with RemoveXXX, and pair CreateXXX methods with DeleteXXX, but do not mix the pairs. The above guidance may be intuitive for native English speakers, but English is my second language, and I need to be told about things like this.
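
    As an illustration of the pairing rule above, a hypothetical C# interface - the playlist domain is made up purely for the example:

        // hypothetical domain type, purely for illustration
        public class Track
        {
            public string Title { get; set; }
        }

        public interface IPlaylist
        {
            void Add(Track existing);     // caller supplies an item that already exists elsewhere
            void Remove(Track existing);  // detaches the item from the playlist; it lives on
            Track Create(string title);   // builds a brand-new item and adds it in one step
            void Delete(Track track);     // removes the item permanently
        }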

    Read the article

  • Un-indexing my Tumblr blog's content and moving it to another Tumblr blog

    - by sam
    I've been writing a Tumblr blog for the past year or so and have written about 300 articles, but now I need to move the blog to another site (before, it was running under blog.mysite.com, and I now want it to run under blog.my*new*site.com). I want to keep the archived articles and have them on the new site. What I was hoping to do was export the blog from Tumblr, go into Webmaster Tools and remove all the blog's indexed URLs from Google, then make a new Tumblr blog and import the posts. Would Google see this as new content, since I've deleted its indexed copy? Alternatively, could I just re-map the Tumblr blog to the new subdomain? Though in doing that I would lose all the PR, and it would still look like duplicate content. What's the best way to approach this?

    Read the article

  • Do image backlinks count as backlinks?

    - by sam
    Suppose I have lots of images appearing on Tumblr blogs - the sort of tumblogs with very little text, just reams and reams of images for people to browse through (example: http://whereisthecool.com/) - and my image is embedded in their site like this:

        <a href="http://mysite.com" target="blank">
            <img src="cutecatblog.com/cat.jpg" alt="cute cat"/>
        </a>

    so the image is a link back to my site. Although there is no anchor text to speak of, does Google take into account the alt text of the image? Would this still count in Google's eyes as a backlink?

    Read the article

  • AdWords traffic shows as '(not set)' in Google Analytics

    - by sam
    In Google Analytics, if I go to Traffic Sources > Search > Paid, all I get is "(not set)" instead of the usual list of keywords that people have used to find the ad, which makes it really difficult to understand what's going on in the campaign. I've gone into AdWords and turned on auto-tagging, but I still have the same problem. Any idea how I can fix this, so that under the Paid tab I get the phrases people have used to find my ad in PPC?

    Read the article

  • Unit test SHA256 wrapper queries

    - by Sam Leach
    I am just beginning to write unit tests, so please bear with me. I have the following SHA256 wrapper:

        public static string SHA256(string plainText)
        {
            StringBuilder sb = new StringBuilder();
            SHA256CryptoServiceProvider provider = new SHA256CryptoServiceProvider();
            var hashedBytes = provider.ComputeHash(Encoding.UTF8.GetBytes(plainText));
            for (int i = 0; i < hashedBytes.Length; i++)
            {
                sb.Append(hashedBytes[i].ToString("x2").ToLower());
            }
            return sb.ToString();
        }

    Do I want to be testing it? If so, what do you recommend? My thought process is as follows: what logic is there here? The answer is my for loop and ToString("x2"), so as I understand it that is the part I want to be testing. I can assume Encoding.UTF8.GetBytes(plainText) works - correct assumption? I can assume SHA256CryptoServiceProvider.ComputeHash() works - correct assumption? I want to test only my own logic, which in this case is limited to the printing of the hex-encoded hash. Correct? Thanks.
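
    One cheap way to cover exactly that formatting logic is a characterization test against a published SHA-256 test vector - the value below is the well-known digest of "abc"; the class name Hasher and the NUnit attributes are assumptions to adapt to your project:

        using NUnit.Framework;

        [TestFixture]
        public class HasherTests
        {
            [Test]
            public void SHA256_OfAbc_MatchesPublishedVector()
            {
                // "Hasher" is a stand-in for whatever class holds the wrapper above
                Assert.AreEqual(
                    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
                    Hasher.SHA256("abc"));
            }
        }

    If the loop or the "x2" format ever regresses (say, to "X2" or decimal), this fails immediately, without re-testing the framework's ComputeHash.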

    Read the article

  • Suggestions for connecting .NET WPF GUI with Java SE Server

    - by Sam Goldberg
    BACKGROUND: We are building a Java (SE) trading application which will monitor market data and send trade messages based on that data and on user-defined configuration parameters. We plan to provide the user with a thin client, built in .NET (WPF), for managing the parameters, controlling the server behavior, and viewing the current state of the trading. The client doesn't need real-time updates; it will instead refresh the view once every few seconds (or whatever interval the user configures). The client has about six different operations it needs to perform with the server, for example: CRUD on configuration parameters; querying a subset of the data; receiving updates of current positions from the server. It is possible that most of the operations (except receiving data) are just different flavors of managing the configuration parameters, but it's too early in our analysis to be sure.

    To connect the client with the server, we have been considering: a SOAP web service; a RESTful service; or building a custom TCP/IP-based API (text or XML) - the least preferred option, though we use this approach with other applications we have.

    As best as I understand it, the pros and cons of the different web service flavors are as follows. SOAP - pro: totally automated in .NET (and Java); modifying the server-side interface requires no code changes in the communication layer, just refreshing the web service reference to regenerate the classes. Con: more overhead in the communication layer (more text sent, etc.), and we're not using a J2EE container, so maybe it doesn't work so well with J2SE. REST - pro: lighter weight, less data, with good .NET and Java support. (I don't have any real experience with it, so I don't know what other benefits it has.) Con: the client will not automatically become aware of any new operations or properties (?), so the communication layer needs to be updated by a developer if the server interface changes. Con for both approaches: the server cannot really push updates to the client at regular intervals (?). (However, we won't mind if the client polls the server to get updates.)

    QUESTION: What are your opinions on the above options, or suggestions for other ways to connect the two parts? (Ideally, we don't want to put much work into the communication layer, because it's not the significant part of the application, so the more off-the-shelf and automated, the better.)
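
    For the "client polls every few seconds" model mentioned above, the .NET side can stay very small regardless of which transport wins. A rough sketch - the URL, interval, payload shape, and the keepPolling/UpdateView hooks are all made up for illustration:

        // rough sketch: poll the server on a background thread, off the WPF dispatcher
        var client = new System.Net.WebClient();
        while (keepPolling)                    // hypothetical volatile flag owned by the app
        {
            // endpoint and payload format are assumptions, not a real API
            string payload = client.DownloadString("http://tradeserver:8080/positions");
            UpdateView(payload);               // hypothetical: marshals the data to the UI thread
            System.Threading.Thread.Sleep(5000);
        }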

    Read the article

  • Where to find the GTX 560 Ti driver for Ubuntu 12.04?

    - by sam
    I found this NVIDIA link: http://www.nvidia.com/object/linux-display-amd64-295.59-driver.html. It says:

        Linux x64 (AMD64/EM64T) Display Driver, Version 295.59
        Certified supported: NVS 5400M, NVS 310, Quadro 410, GeForce GT 620M, GeForce GT 640M, GeForce GT 640M LE, GeForce GT 650M, GeForce GTX 660M, GeForce GTX 670M, GeForce GTX 675M, GeForce GTX 555, GeForce GTX 560 SE, GeForce GT 415, GeForce GTX 460 v2

    Does that driver support the GTX 560 Ti? If not, where is the proper driver? (I don't want to use the driver offered in Ubuntu's settings.) Thank you!

    Read the article

  • Google doesn't always show rich snippets when the site uses structured data [duplicate]

    - by Sam Se
    I'm so tired of Google's structured-data behaviour for recipes. After some days, the rich snippet loses the image and the extra information; then I test the page again and it shows again. Some days later it might disappear again, even though it still renders correctly in the testing tool. What can I do? I have tried both RDFa and schema.org microdata.

    Read the article

  • Two Google Analytics profiles for two sections of the same site

    - by sam
    I've got a website which for the most part is a portfolio. There is another section of the site, mysite.com/micro-site, which ranks extremely well for its chosen term/topic and brings in lots of traffic, but actually has little to do with the core business. It was really made as a piece of content, in the same way sites like http://chrome.com/campaigns/rollit are. For the main site I use one Google Analytics profile and set of tags; for the micro site I have a completely different profile and set of tags. The main reason I've done this is that the traffic stats and insights for the micro site are essentially just noise - it's nice to have the traffic, but the numbers don't help when reading analytics reports, so if they were combined my reports would be a mess. Are there any disadvantages or negatives to doing this?

    Read the article

  • Display Driver Issue on an HP TX2-1160ea Notebook

    - by Sam
    I'm new to Linux and recently switched to Ubuntu 11.04 due to a project requirement. My laptop has been freezing and going to a black screen of death whenever I run anything display-related (sharing the desktop, streaming video, etc.). Today I went through the Ubuntu forums to install the appropriate graphics driver and, after doing so, I rebooted. Before login it gave an error saying "select the recovery mode"; after clicking OK it didn't repeat the error on the next reboot, but I've lost the 11.04 graphical interface, and all I see is a v10-style interface with slow visuals (even scrolling up/down in the browser is really slow). For reference, here's a desktop screenshot so you can understand the situation. The laptop is also overheating. How can I fix this problem and get the Ubuntu 11.04 look back? I also tried Google but couldn't find this issue anywhere.

    Some general information:

        Laptop: HP TouchSmart TX2-1160ea
        Processor: AMD Turion X2
        Memory: 4 GB
        OS: Ubuntu 11.04

    Some debugging information:

        $ report-hw | grep controller
        lspci -knn: 00:11.0 SATA controller [0106]: ATI Technologies Inc SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] [1002:4391]
        lspci -knn: 00:14.3 ISA bridge [0601]: ATI Technologies Inc SB7x0/SB8x0/SB9x0 LPC host controller [1002:439d]
        lspci -knn: 01:05.0 VGA compatible controller [0300]: ATI Technologies Inc RS780M/RS780MN [Radeon HD 3200 Graphics] [1002:9612]
        lspci -knn: 08:00.0 Network controller [0280]: Broadcom Corporation BCM4322 802.11a/b/g/n Wireless LAN Controller [14e4:432b] (rev 01)
        lspci -knn: 09:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 02)

    And:

        $ dpkg -l '*fglrx*'
        Desired=Unknown/Install/Remove/Purge/Hold
        | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
        |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
        ||/ Name           Version        Description
        +++-==============-==============-============================================
        ii  fglrx          2:8.840-0ubunt Video driver for the ATI graphics accelerato
        ii  fglrx-amdcccle 2:8.840-0ubunt Catalyst Control Center for the ATI graphics
        un  fglrx-control  <none>         (no description available)
        un  fglrx-control- <none>         (no description available)
        ii  fglrx-dev      2:8.840-0ubunt Video driver for the ATI graphics accelerato
        un  fglrx-driver   <none>         (no description available)
        un  fglrx-driver-d <none>         (no description available)
        un  fglrx-kernel-s <none>         (no description available)
        un  fglrx-modalias <none>         (no description available)
        un  xfree86-driver <none>         (no description available)
        un  xfree86-driver <none>         (no description available)
        un  xorg-driver-fg <none>         (no description available)
        un  xorg-driver-fg <none>         (no description available)

    If you need any more information that could help, just ask.
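
    Given that dpkg shows fglrx installed, a commonly suggested recovery path for this symptom is to purge the proprietary driver and let X fall back to the open-source one - treat this as a sketch to adapt, not a guaranteed fix:

        sudo apt-get remove --purge fglrx fglrx-amdcccle fglrx-dev
        # let X auto-detect hardware again instead of using the fglrx-generated config
        sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.backup
        sudo reboot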

    Read the article

  • Determining Cost of API Calls

    - by Sam
    [This is a cross-post, originally posted by me on SO; I think the question is more appropriate here.] I was going through the AdWords API and came across its rate sheet: http://code.google.com/apis/adwords/docs/ratesheet.html. They charge $0.25 per 1,000 API units and, under the 'Operation Costs' section, list the cost (in API units) of the different API calls. I am curious: based on what factors do they (and other API developers) calculate the cost of an API call? Is there any simple formula or standard way to determine this? Note: by 'cost' of an API call I don't mean the money, but the API units. For example, how do you determine that one API call costs 100 'units' and another 1,000?
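
    The dollar arithmetic itself is straightforward; the open question above is how the unit weights are chosen. A toy version of the billing math in Python, using only the figures quoted from the rate sheet:

        # $0.25 per 1,000 API units, per the rate sheet
        def dollar_cost(api_units, rate_per_1000=0.25):
            return api_units / 1000.0 * rate_per_1000

        print(dollar_cost(100))    # 0.025 -> a 100-unit operation costs 2.5 cents
        print(dollar_cost(1000))   # 0.25  -> a 1,000-unit operation costs 25 cents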

    Read the article

  • Understanding CTR in Google Webmaster Tools

    - by sam
    I've got a site that's showing a 9% CTR for a phrase in Google Webmaster Tools, but the average position of my site is 14th (this includes 7 local results for this phrase). I'm a little confused about what the CTR actually means. Is it: (a) of everyone who searches for that phrase, 9% click my site; or (b) of everyone who actually sees my site in the search results, 9% click through (bearing in mind that 14th is high on page 2 when the local listings are shown)?

    Read the article

  • Why won't title attributes get indexed in Google?

    - by Sam
    When I search for "Ride On" plus my site's name, I see that the page is indexed. But when I search for "Green Horse" plus my site's name, my site doesn't appear anywhere in the results! Here's my code:

        <td><a href="#" title="Green Horse Ride">Ride On</a></td>

    Does this mean that title attributes are not indexed/shown by Google at all? Is alt better to use? What are the other alternatives besides title and alt?

    Read the article

  • Changing screen saver settings without completely removing gnome-screensaver and installing xscreensaver

    - by Sam
    I am on 11.04 at the moment and would like to figure out how to change the screensaver settings without completely switching to xscreensaver. I read somewhere that xscreensaver doesn't play as nicely with some of GNOME's features, like screen locking. I also read a guide that recommended installing xscreensaver, configuring the screensaver settings there, and having gnome-screensaver supposedly pick them up as well - but that did not work for me. gnome-screensaver sure sucks.
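
    For the record, on 11.04 gnome-screensaver stores its settings in gconf, so some of them can be changed from the command line without touching xscreensaver. A sketch - the key names are recalled from the gnome-screensaver schema of that era, so verify them first with gconftool-2 -R /apps/gnome-screensaver:

        # stop the screensaver from activating on idle
        gconftool-2 --type bool --set /apps/gnome-screensaver/idle_activation_enabled false
        # keep the screensaver but don't lock the screen with it
        gconftool-2 --type bool --set /apps/gnome-screensaver/lock_enabled false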

    Read the article

  • Unity isn't starting on 13.10 (with Cinnamon 2.0 installed)

    - by Sam Pearman
    Since upgrading to 13.10, I can't log in to the Unity desktop. LightDM works correctly, but attempting to log in starts the session and then drops back to LightDM. I've already switched to a terminal (Ctrl+Alt+F2) and done this:

        sudo apt-get update
        sudo apt-get install --reinstall ubuntu-desktop
        sudo apt-get install unity

    Logging in as a guest session also fails. Logging in to other window managers works, with varying degrees of success. Note: I have Cinnamon 2.0 installed from a PPA, and I'm using a two-monitor setup. Also of note: in the session just before my upgrade to 13.10, the Unity background failed to display at all, instead showing whatever was left in the screen buffer from the previous frame. The entire OS otherwise worked correctly, so I just ignored it for that session. No other upgrades or even updates were done before this occurred. My upgrade path to 13.10 was basically: install 13.04 alongside Windows 7, use Ubuntu as a glorified web browser for a while, get updates (in preparation for 13.10), install 13.10. I also used Unity Tweak Tool to change some aspects of Unity, particularly auto-hide. Any help or ideas would be appreciated, as I'm typing this on my phone :(
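
    Since reinstalling the packages didn't help, stale per-user settings are a plausible suspect (especially with Cinnamon and Unity Tweak Tool both having written to them). A frequently suggested, reversible experiment - this resets the Compiz/Unity settings for your user only:

        # from the Ctrl+Alt+F2 terminal, logged in as your own user
        dconf reset -f /org/compiz/
        # then restart the display manager and try logging in again
        sudo service lightdm restart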

    Read the article
