Search Results

Search found 21301 results on 853 pages for 'duplicate values'.

Page 514/853 | < Previous Page | 510 511 512 513 514 515 516 517 518 519 520 521  | Next Page >

  • can I make Excel always open a delimited text file with "text" translation?

    - by khedron
    Hi there, Opening a tab-delimited data file in Excel to view & manipulate the data is a very common operation around here. However, by default Excel (2003/4 or 2007/8) will read the columns in a "General" format, which occasionally does terrible things like turning "1/2" into "2-Jan". Is there a way to tell Excel never to do this, but always process the values as Text, without going through the format wizard, selecting all of the columns, and doing it manually? Extra points if this works in both Mac and Windows versions of Excel.

    Read the article

  • 3d point cloud render from x,y,z 2d array with texture

    - by user1733628
    Need some direction on 3D point cloud display using OpenGL in C++ (VS2008). I am brand new to OpenGL and trying to do a 3D point cloud display with a texture. I have three 2D arrays (each the same size, 1024x512) representing the x, y, z of each point. I think I am on the right track with:

        glBegin(GL_POLYGON);
        for(int i = 0; i < 1024; i++) {
            for(int j = 0; j < 512; j++) {
                glVertex3f(x[i][j], y[i][j], z[i][j]);
            }
        }
        glEnd();

    Now this loads all the vertices in the buffer (I think), but from here I am not sure how to proceed. Or I am completely wrong here. Then I have another 2D array (same size) that contains color data (values from 0-255) that I want to use as a texture on the 3D point cloud and display. I understand that this may be a very basic OpenGL implementation for some, but for me this is a huge learning curve. So any pointers, nudge or kick in the right direction will be appreciated.
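
    A minimal immediate-mode sketch of one possible direction, assuming the 0-255 array is called c (a hypothetical name) and is used as a per-point grayscale color rather than a true texture:

        // Hypothetical sketch: draw each sample as a point and reuse the
        // 0-255 array (assumed to be named c) as a grayscale color.
        glBegin(GL_POINTS);
        for (int i = 0; i < 1024; ++i) {
            for (int j = 0; j < 512; ++j) {
                unsigned char v = c[i][j];   // assumed 0-255 intensity
                glColor3ub(v, v, v);         // per-point grayscale color
                glVertex3f(x[i][j], y[i][j], z[i][j]);
            }
        }
        glEnd();

    Once the basic display works, moving the data into a vertex buffer object (glBufferData plus glDrawArrays) is the more modern route, but immediate mode is usually the quickest way to get the points on screen.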

    Read the article

  • Designing object oriented programming

    - by Pota Onasys
    Basically, I want to make API calls using an SDK I am writing. I have the following classes:

        Car
        CarData (stores input values needed to create a car, like model, make, etc.)

    Basically, to create a car I do the following:

        [Car carWithData: cardata onSuccess: successHandler onError: errorHandler]

    That is basically a factory method that creates an instance of Car after making an API call request, populates the new Car instance with the response, and passes that instance to the successHandler. So "Car" has the above static method to create the car, but also has non-static methods to edit and delete cars (which make edit and delete API calls to the server). The create static method passes a new car to the successHandler by doing the following:

        successHandler([[Car alloc] initWithDictionary: dictionary])

    The success handler can go ahead and use that new car to do the following:

        [car update: cardata]
        [car delete]

    considering the new car object now has an ID that it can pass to the update and delete API calls. My questions: Do I need a CarData object to store user inputs, or can I store them in the Car object that would also later store the response from all of the API calls? How can I improve this model?

    With regards to CarData, note that there might be different inputs for the different API calls. So the create function might need to know model, make, etc., but the find function might need to know the number of items to find, the limit, the start id, etc.

    Read the article

  • Activity Stream

    [Do you tweet? Follow us on Twitter @matthawley and @codeplex]

    We deployed a new version of the CodePlex website yesterday.

    Redesigned Home Page with Activity Stream

    In CodePlex we continuously look for ways to provide our users with the most recent and relevant information they are seeking. It is with this in mind that we released our latest feature, the home page activity stream. The activity stream showcases events taking place on projects you are a part of as well as projects you are following. There are many different events in the system that cause activities to be created, including starting a discussion, creating a work item, etc. All the functionality that was available on the former home page, such as creating a new project or finding a project that needs help, is available on the right side of the new home page.

    The CodePlex team values your feedback. We are frequently monitoring Twitter, our Discussions, and Issue Tracker. If you have not visited the Issue Tracker recently, please take a few minutes to suggest or vote on a feature you would like to see implemented.

    Read the article

  • Quick Replace in Visual Studio 2010 fails to use Tagged Expression \n

    - by slomojo
    I'm trying to do some basic regex Quick Replace operations in Visual Studio 2010, but when I use regex grouping I don't get Tagged Expressions (i.e. \1, \2, etc.) returning their values; instead they are blank. For example:

    Text:

        int a = int.Parse("10");
        int b = int.Parse("20");
        int c = int.Parse("30");

    Search pattern (regex enabled):

        int\.Parse\("([0-9]*)"\);

    Replace:

        \1;

    Replaced text:

        int a = ;
        int b = ;
        int c = ;

    Read the article

  • Should static parameters in an API be part of each method?

    - by jschoen
    I am currently creating a library that is a wrapper for an online API. The obvious end goal is to make it as easy for others to use as possible. As such I am trying to determine the best approach when it comes to common parameters for the API. In my current situation there are three (consumer key, consumer secret, and an authorization token). They are essentially needed in every API call. My question is: should I make these three parameters required for each method, or is there a better way? I see my current options as being:

    Place the parameters in each method call:

        public ApiObject callMethod(String consumerKey, String consumerSecret, String token, ...)

    This one seems reasonable, but seems awfully repetitive to me.

    Create a singleton class that the user must initialize before calling any API methods. This seems wrong, and would essentially limit them to accessing one account at a time via the API (which may be reasonable, I dunno).

    Make them place the parameters in a properties file in their project. That way I can load the properties and store them. This seems similar to the singleton to me, but they would not have to explicitly call something to initialize these values.

    Is there another option I am not seeing, or a more common practice in this situation that I should be following?

    Read the article

  • Simple C: How do I scan this information in properly?

    - by Doc
    OK, this is a simple question, but for some reason I just can't get it right. I have to scan from a file hundreds of lines of data and store it in an array (which I can normally do an OK job with). However, at one point the file will specify a number that then corresponds to the next batch of chars, ints and floats going into various arrays. As I know I am not describing this correctly, here is an example; one line of the file I am reading will contain something close to this:

        0221 T 2 S P 850 150 0.90 0.75 500 24 2 2012 G A 7 9600.00 0.1 1000 Name_of_place
        0104 L 1 F 400 1.00 0.75 500 24 2 2012 G A 7 9600.00 0.1 1000 Ballroom

    The problem I am having is this part here:

        0221 T 2 S P 850 150 0.90 0.75 500 24 2 2012 G A 7 9600.00 0.1 1000 Name_of_place
        0104 L 1 F 400 1.00 0.75 500 24 2 2012 G A 7 9600.00 0.1 1000 Ballroom

    The rest after this is generally the exact same; however, at this point the number at the front decides all the values that are going in. I am almost completely lost on how to write a way that can scan this and store the data into arrays correctly.
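
    One common way to handle this kind of record is to read each whole line first, pull off the leading code, and then decide how to parse the rest. A rough sketch (plain C-style I/O that also compiles as C++; the file name and the per-code field layouts are assumptions, not taken from the question):

        // Hypothetical sketch: read each line, look at the leading code first,
        // then parse the remaining fields according to that code.
        #include <cstdio>

        int main(void) {
            FILE *fp = fopen("input.txt", "r");        // assumed file name
            if (!fp) return 1;

            char line[512];
            while (fgets(line, sizeof line, fp)) {
                int code = 0;
                int offset = 0;
                // Read it as a string instead if the leading zero matters.
                if (sscanf(line, "%d %n", &code, &offset) != 1)
                    continue;                          // no leading number on this line

                switch (code) {
                case 221:                              // layout used by the "0221" records (assumed)
                    // e.g. sscanf(line + offset, "%c %d %c %c %d %d %f %f ...", ...);
                    break;
                case 104:                              // layout used by the "0104" records (assumed)
                    break;
                default:
                    break;
                }
            }
            fclose(fp);
            return 0;
        }

    The same idea works with fscanf directly, but reading a whole line and then re-parsing it with sscanf makes it easier to branch on the leading number before committing to a field layout.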

    Read the article

  • how to store and retrieve/generate UI?

    - by thindery
    I'm working on a site that will have hundreds, and eventually thousands, of paper products that users can customize online. Here is a very simple sample of what needs to be generated based on the product id: demo. This is a very simple version. I plan on replacing text fields with prettier elements (like the slider on tab 3). I imagine most of this can be achieved via jQuery. So basically a product will have multiple pages (tabs), with multiple form elements on each page.

    I've never done a large-scale project like this before, and I am looking for ideas/suggestions for how I can store the info for each product that needs to be generated to create the UI. For each product, I need to store how many pages there are, what form fields are on each page, and the order of the fields on the page, as well as default text values and form options (font size, etc.). Then, with all this info stored somewhere, I can have the web app retrieve it and generate the UI with text fields, sliders, and other jQuery-ish form enhancements for that particular product.

    Can anyone toss out some suggestions, links, blogs, tutorials? I'm not really sure where to begin with this or what I need to start to investigate. I have experience with PHP, MySQL, JavaScript, jQuery, HTML, CSS, and that is really about it. I'm open to learning (and would enjoy exploring) new frameworks and programming approaches that will really get this web app working correctly, efficiently, and effectively. Maybe I should start looking into an MVC framework? Like I said, I really have no idea what the best approach is. Please let me know your suggestions!

    Read the article

  • Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work? [closed]

    - by themoondothshine
    I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context:

    I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so. An application is linked against libsome1.so. This application uses libdl.so to dynamically load another module, say libmagic.so. Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run-time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION.

    So next I try to compile and link libmagic.so with a linker script which hides all symbols except three which are defined in libmagic.so and are exported by it. This works... or at least the libVersion() and LIB_VERSION values match (and it reports version 2, not 1). However, when some data structures are serialized to disk, I noticed some corruption. If, in the application's directory, I delete libsome1.so and create a soft link in its place pointing to libsome2.so, everything works as expected and the same corruption does not happen.

    I can't help but think that this may be caused by some conflict in the run-time linker's resolution of symbols. I've tried many things, like trying to link libsome2.so so that all symbols are aliased to symbol@@VER_2 (which I am still confused about, because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2), but nothing seems to work. What am I doing wrong?

    Read the article

  • How to increase the disk cache of Windows 7

    - by Mark Christiaens
    Under Windows 7 (64 bit), I'm reading through 9000 moderately sized files. In total, there is more than 200 MB of data. Using Java (JDK 1.6.21) I'm iterating over the files. The first 1400 or so go at full speed but then speed drops off to 4ms per file. It turns out that the main cost is incurred simply by opening the files. I'm opening the files using new FileInputStream (and of course closing them in time to avoid file leaks). After some investigating, I see that Windows' disk cache is using only 100 MB or so of RAM although I have 8 GiB available. I've tried increasing the cache size using the CacheSet tool but any values I provide are considered out of range. I've also tried enabling the LargeSystemCache registry key but (after rebooting) the CacheSet tool still indicates I'm using 100 MB of cache (and doesn't increase during the test run). Does anybody have any suggestions to "encourage" Windows 7 to cache my 9000 files?

    Read the article

  • I try to access a NFS mount via FTP. It works but the FTP Dir listing is very slow

    - by W0bble
    I mount an NFS share using this command:

        mount -o rsize=8192,wsize=8192,timeo=14,intr serverip:/directory /mnt/directory

    The mount appears on the client as expected, and a command like "ls -a" works pretty fast on the NFS mount. But when I try to list the mounted directory via FTP it gets very, very slow (1.250 bytes in 160,39s (0,01KB/s)). Surprisingly, downloading files via FTP from the NFS mount works at normal speeds. I tested several values for the rsize and wsize parameters with no success. Both client and server are running Debian squeeze and NFSv4.

    Read the article

  • Mouse pointer size inconsistent

    - by charon00
    Since installing Ubuntu 12.04, I've been having a problem with the mouse pointer size. On the desktop, it is quite a bit larger than it should be (24), though the different cursors (editing text, hyperlink hand, etc) are correct. The size changes to the correct size when the pointer is over some application windows (GVim, Netbeans, Firefox), but then changes back once it is moved out of the window. There was a similar question here, but the Xdefaults solution did not work for me, and I didn't want to try the one requiring editing the icon image. In addition, I've tried changing the cursor theme using sudo update-alternatives --config x-cursor-theme as well as using the dconf-editor, but though I can change the theme, the size issue remains. In case it's relevant, I'm running on a dual-screen setup with monitor sizes of 2560x1600 and 1920x1080, using the NVidia video driver. Is there another way to control pointer size, or a setting that might be messing it up?

    EDIT: These are the values/options I have for update-alternatives and in dconf-editor. I'm now wondering if Netbeans and Firefox are making the mouse pointer smaller than it should be, but I'm not sure how big 24 should be...

    update-alternatives:

          Selection    Path                                       Priority   Status
        ------------------------------------------------------------
          0            /usr/share/icons/DMZ-White/cursor.theme    90         auto mode
          1            /etc/X11/cursors/core.theme                30         manual mode
          2            /etc/X11/cursors/handhelds.theme           20         manual mode
          3            /etc/X11/cursors/redglass.theme            20         manual mode
          4            /etc/X11/cursors/whiteglass.theme          20         manual mode
        * 5            /usr/share/icons/DMZ-Black/cursor.theme    30         manual mode
          6            /usr/share/icons/DMZ-White/cursor.theme    90         manual mode

    dconf-editor: I can't post the image since I'm a new user, but the cursor-size is set to 24 and the cursor-theme is DMZ-Black.

    Read the article

  • A* PathFinding Not Consistent

    - by RedShft
    I just started trying to implement a basic A* algorithm in my 2D tile-based game. All of the nodes are tiles on the map, represented by a struct. I believe I understand A* on paper, as I've gone through some pseudo code, but I'm running into problems with the actual implementation. I've double- and triple-checked my node graph, and it is correct, so I believe the issue is with my algorithm. The issue is that with the enemy still and the player moving around, the path finding function will write "No Path" an astounding amount of times and only every so often write "Path Found", which seems inconsistent. This is the node struct for reference:

        struct Node
        {
            bool walkable;          //Whether this node is blocked or open
            vect2 position;         //The tile's position on the map in pixels
            int xIndex, yIndex;     //The index values of the tile in the array
            Node*[4] connections;   //An array of pointers to nodes this current node connects to
            Node* parent;
            int gScore;
            int hScore;
            int fScore;
        }

    Here is the rest: http://pastebin.com/cCHfqKTY

    This is my first attempt at A*, so any help would be greatly appreciated.
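
    For comparison, here is a minimal, self-contained C++ sketch of the core A* loop over a small 4-connected grid (hypothetical grid and names, not taken from the pastebin). One detail that matters in node-based implementations like the struct above: the per-search state stored in the nodes (parent, scores, open/closed membership) has to be cleared before every new search, otherwise stale values from a previous search can make results intermittent.

        // Minimal, self-contained A* on a small 4-connected grid (hypothetical
        // example, not the poster's code). Path reconstruction via parent
        // pointers is omitted for brevity.
        #include <cstdio>
        #include <cstdlib>
        #include <vector>
        #include <algorithm>

        struct Node {
            bool walkable = true;
            int x = 0, y = 0;
            Node* parent = nullptr;
            int g = 0, h = 0, f = 0;
            bool open = false, closed = false;
        };

        static int heuristic(const Node& a, const Node& b) {
            return std::abs(a.x - b.x) + std::abs(a.y - b.y);   // Manhattan distance
        }

        int main() {
            const int W = 8, H = 8;
            std::vector<Node> grid(W * H);
            for (int y = 0; y < H; ++y)
                for (int x = 0; x < W; ++x) { grid[y * W + x].x = x; grid[y * W + x].y = y; }
            for (int y = 1; y < 7; ++y) grid[y * W + 4].walkable = false;   // a vertical wall

            Node* start = &grid[1 * W + 1];
            Node* goal  = &grid[6 * W + 6];

            std::vector<Node*> open{start};
            start->open = true;
            bool found = false;

            while (!open.empty()) {
                // Pop the open node with the lowest f score.
                auto it = std::min_element(open.begin(), open.end(),
                                           [](Node* a, Node* b) { return a->f < b->f; });
                Node* cur = *it;
                open.erase(it);
                cur->open = false;
                cur->closed = true;

                if (cur == goal) { found = true; break; }

                const int dx[4] = {1, -1, 0, 0};
                const int dy[4] = {0, 0, 1, -1};
                for (int d = 0; d < 4; ++d) {
                    int nx = cur->x + dx[d], ny = cur->y + dy[d];
                    if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                    Node* nb = &grid[ny * W + nx];
                    if (!nb->walkable || nb->closed) continue;

                    int g = cur->g + 1;
                    if (!nb->open || g < nb->g) {       // new node, or a cheaper route to it
                        nb->parent = cur;
                        nb->g = g;
                        nb->h = heuristic(*nb, *goal);
                        nb->f = nb->g + nb->h;
                        if (!nb->open) { nb->open = true; open.push_back(nb); }
                    }
                }
            }

            std::printf(found ? "Path Found\n" : "No Path\n");
            return 0;
        }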

    Read the article

  • How to make other semantics behave like SV_Position?

    - by object
    I'm having a lot of trouble with shadow mapping, and I believe I've found the problem. When passing vectors from the vertex shader to the pixel shader, does the hardware automatically change any of the values based on the semantic? I've compiled a barebones pair of shaders which should illustrate the problem.

    Vertex shader:

        struct Vertex
        {
            float3 position : POSITION;
        };

        struct Pixel
        {
            float4 position : SV_Position;
            float4 light_position : POSITION;
        };

        cbuffer Matrices
        {
            matrix projection;
        };

        Pixel RenderVertexShader(Vertex input)
        {
            Pixel output;
            output.position = mul(float4(input.position, 1.0f), projection);
            output.light_position = output.position;
            // We simply pass the same vector in screenspace through different semantics.
            return output;
        }

    And a simple pixel shader to go along with it:

        struct Pixel
        {
            float4 position : SV_Position;
            float4 light_position : POSITION;
        };

        float4 RenderPixelShader(Pixel input) : SV_Target
        {
            // At this point, (input.position.z / input.position.w) is a normal depth value.
            // However, (input.light_position.z / input.light_position.w) is 0.999f or similar.
            // If the primitive is touching the near plane, it very quickly goes to 0.
            return (0.0f).rrrr;
        }

    How is it possible to make the hardware treat light_position in the same way that position is being treated between the vertex and pixel shaders?

    EDIT: Aha! (input.position.z) without dividing by W is the same as (input.light_position.z / input.light_position.w). Not sure why this is.

    Read the article

  • How to set up my texture coordinates correctly in GLSL 150 and OpenGL 3.3?

    - by RubyKing
    I'm trying to do texture mapping in GLSL 150 and OpenGL 3.3. Here are my shaders; I've tried my best to get this as correct as possible, hopefully it is :) I'm guessing you want to know what the problem is: my texture shows, but not in its fullest form, just one section of it, not the full texture on the quad. All I can think of is that it's the texture coordinates in the main.cpp, which is at the bottom of this post.

    Fragment shader:

        #version 150

        in vec2 Texcoord_VSPS;
        out vec4 color;

        // Values that stay constant for the whole mesh.
        uniform sampler2D myTextureSampler;

        //Main Entry Point
        void main()
        {
            // Output color = color of the texture at the specified UV
            color = texture2D( myTextureSampler, Texcoord_VSPS );
        }

    Vertex shader:

        #version 150

        //Position Container
        in vec3 position;

        //Container for TexCoords
        attribute vec2 Texcoord0;
        out vec2 Texcoord_VSPS;
        //out vec2 ex_texcoord;

        //TO USE A DIFFERENT COORDINATE SYSTEM JUST MULTIPLY THE MATRIX YOU WANT

        //Main Entry Point
        void main()
        {
            //Translations and w Cordinates stuff
            gl_Position = vec4(position.xyz, 1.0);
            Texcoord_VSPS = Texcoord0;
        }

    Link to main.cpp: http://pastebin.com/t7Vg9L0k
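
    Without seeing the pastebin, a typical client-side setup for the Texcoord0 attribute looks something like the sketch below (hypothetical variable names such as shaderProgram; note that in GLSL 1.50 the attribute qualifier is normally written as in). For the whole texture to appear on the quad, the UVs need to span 0..1 and be supplied in the same vertex order as the positions.

        // Hypothetical sketch: upload per-vertex UVs for a quad and wire them
        // to the "Texcoord0" attribute of the linked program.
        GLfloat uvs[] = {
            0.0f, 0.0f,   // same vertex order as the position buffer (assumed)
            1.0f, 0.0f,
            0.0f, 1.0f,
            1.0f, 1.0f,
        };

        GLuint uvBuffer;
        glGenBuffers(1, &uvBuffer);
        glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
        glBufferData(GL_ARRAY_BUFFER, sizeof(uvs), uvs, GL_STATIC_DRAW);

        GLint loc = glGetAttribLocation(shaderProgram, "Texcoord0");   // shaderProgram is assumed
        if (loc >= 0) {
            glEnableVertexAttribArray(loc);
            glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
        }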

    Read the article

  • PHP errors not being displayed

    - by Mike
    I'm using PHP with Apache on Ubuntu 12.10. Errors are not being displayed to the browser for some reason and I can't figure it out. I have the following in my php.ini file:

        error_reporting = E_ALL & ~E_DEPRECATED
        display_errors = On
        display_startup_errors = On
        log_errors = On

    I am also positive that I have edited the correct ini file by verifying it with php_ini_loaded_file(). I can also verify that the values are correctly set by doing the following in my script:

        echo ini_get("display_errors");         // Outputs 1
        echo ini_get("display_startup_errors"); // Outputs 1
        echo ini_get("log_errors");             // Outputs 1
        echo ini_get("error_reporting");        // Outputs -1

    I have tried what seems like every possible combination of these settings (and restarting Apache after each change) and it is just not outputting errors. I am also not using ini_set anywhere in the script. It is being set only from the ini file. Any ideas why errors aren't being displayed?

    Read the article

  • MVVM and service pattern

    - by alfa-alfa
    I'm building a WPF application using the MVVM pattern. Right now, my viewmodels call the service layer to retrieve models (how is not relevant to the viewmodel) and convert them to viewmodels. I'm using constructor injection to pass the required service to the viewmodel. It's easily testable and works well for viewmodels with few dependencies, but as soon as I try to create viewmodels for complex models, I have a constructor with a LOT of services injected into it (one to retrieve each dependency, plus a list of all available values to bind to an itemsSource, for example). I'm wondering how to handle multiple services like that and still have a viewmodel that I can unit test easily. I'm thinking of a few solutions:

    Creating a services singleton (IServices) containing all the available services as interfaces. Example: Services.Current.XXXService.Retrieve(), Services.Current.YYYService.Retrieve(). That way, I don't have a huge constructor with a ton of service parameters.

    Creating a facade for the services used by the viewmodel and passing this object in the constructor of my viewmodel. But then I'll have to create a facade for each of my complex viewmodels, and it might be a bit much...

    What do you think is the "right" way to implement this kind of architecture?

    Read the article

  • netgear GS108TV2 RSTP configuration

    - by jhowland
    I have a large set of GS108TV2 units--my goal is to set up a network which is comprised of several loops for redundancy/fault tolerance. I have a minimal 3 switch loop configured, with RSTP enabled on two ports on each switch. I have my bridge max age set to 6, and my bridge forward delay set to 4, which are the minimum values allowed. Hello time is fixed at 2 seconds. The switches respond to a cable being removed from a socket, but it takes too long. I cannot get the switch to respond to a loss of connection on one of the redundant ports in less than 20 seconds. Is there any way to configure these switches to respond faster than 20 seconds? That is unacceptable for my application. thanks in advance for any help

    Read the article

  • Is there a feature in Nagios that allows Memory between checks?

    - by Kyle Brandt
    There are various instances where there are values I want to monitor with Nagios, and I don't care as much about the value itself as about how it compares to the previous value. For instance, I wrote a check for the fail counters in OpenVZ. In this case, I didn't care about the value that much, but rather whether the value had increased. Another example might be switch ports: I would be most interested to get alerted about a change of state of a port (although perhaps a trap would be better for this one). For my OpenVZ script, I used a temp file, but I am wondering if there is a better way. Maybe Nagios has some variables that plugins (check scripts) can access that are persistent across checks?
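
    The temp-file approach amounts to the check script keeping its own memory between runs: persist the last value on disk, compare, then overwrite it. A minimal sketch of that idea (hypothetical counter and file path; exit codes follow the usual Nagios convention of 0 for OK and 2 for CRITICAL):

        // Hypothetical sketch of a "did the counter increase?" check:
        // the previous value is kept in a state file between runs.
        #include <cstdio>

        long read_current_counter() {
            // Placeholder: a real plugin would read the OpenVZ fail counter
            // (or whatever value is being monitored) here.
            return 42;
        }

        int main() {
            const char* state_file = "/var/tmp/check_counter.state";   // assumed path
            long current = read_current_counter();

            long previous = current;                    // default: no previous sample yet
            if (FILE* in = std::fopen(state_file, "r")) {
                std::fscanf(in, "%ld", &previous);
                std::fclose(in);
            }

            if (FILE* out = std::fopen(state_file, "w")) {
                std::fprintf(out, "%ld\n", current);
                std::fclose(out);
            }

            if (current > previous) {
                std::printf("CRITICAL - counter increased from %ld to %ld\n", previous, current);
                return 2;                               // CRITICAL
            }
            std::printf("OK - counter unchanged at %ld\n", current);
            return 0;                                   // OK
        }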

    Read the article

  • solr administration

    - by devrick0
    Does anyone have any notes for a sysadmin supporting Solr? I'm looking for anything that might be useful for monitoring & metrics as well as troubleshooting. Some useful links I have found are /solr/admin/stats.jsp and /solr/admin/analysis.jsp. In the logs I have noticed, other than the query, "hits", "status" and "QTime" values. The documentation on what these mean is sparse, at least based on the 100+ websites I have checked. QTime appears to be the query response time in milliseconds. Hits is some form of results, but I'm not sure exactly what makes that up, and I'm not sure about status. Typically I see status come back as "0", but I have seen other numbers such as "5", so my theory that it could be either HTTP status codes or a 0-or-1 (good or bad) scheme isn't accurate. All of the documentation I have come across is intended for developers. Any sysadmin-centric documentation would be a big help.

    Read the article

  • xenserver: xe command never returns?

    - by ethrbunny
    I'm trying to port a XenServer 6.2 pool to a new IP address range. I've got three servers total: two currently at their new IPs but no longer in the pool, and one remaining. I'm trying to set IP address information on the two disconnected ones using the xe command and all of its variants. Oddly enough, it never returns with any values:

        xe host-list

    It just sits there until I ctrl-c it. The server is still awake and responding, though. I can enter other commands (e.g. ifconfig) and they work fine. If I enter this same command on the remaining server in the pool, it works OK. I've tried restarting the toolstack and even rebooting. No change. What am I doing wrong?

    Read the article

  • Storing a pass-by-reference parameter as a pointer - Bad practice?

    - by Karl Nicoll
    I recently came across the following pattern in an API I've been forced to use:

        class SomeObject
        {
        public:
            // Constructor.
            SomeObject(bool copy = false);

            // Set a value.
            void SetValue(const ComplexType &value);

        private:
            bool m_copy;
            ComplexType *m_pComplexType;
            ComplexType m_complexType;
        };

        // ------------------------------------------------------------
        SomeObject::SomeObject(bool copy) :
            m_copy(copy)
        {
        }

        // ------------------------------------------------------------
        void SomeObject::SetValue(const ComplexType &value)
        {
            if (m_copy)
                m_complexType.assign(value);
            else
                m_pComplexType = const_cast<ComplexType *>(&value);
        }

    The background behind this pattern is that it is used to hold data prior to it being encoded and sent to a TCP socket. The copy weirdness is designed to make the class SomeObject efficient by only holding a pointer to the object until it needs to be encoded, but also provide the option to copy values if the lifetime of the SomeObject exceeds the lifetime of a ComplexType. However, consider the following:

        SomeObject SomeFunction()
        {
            ComplexType complexTypeInstance(1);   // Create an instance of ComplexType.

            SomeObject encodeHelper;
            encodeHelper.SetValue(complexTypeInstance);   // Okay.

            return encodeHelper;   // Uh oh! complexTypeInstance has been destroyed, and
                                   // now encoding will venture into the realm of undefined
                                   // behaviour!
        }

    I tripped over this because I used the default constructor, and this resulted in messages being encoded as blank (through a fluke of undefined behaviour). It took an absolute age to pinpoint the cause! Anyway, is this a standard pattern for something like this? Are there any advantages to doing it this way vs overloading the SetValue method to accept a pointer that I'm missing? Thanks!
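
    For what it is worth, with the class exactly as given, the dangling-pointer case above is only avoided by opting into copy mode, along these lines (a sketch based on the code in the question):

        // Sketch: opting into copy mode so the helper owns its own ComplexType
        // and can safely outlive complexTypeInstance.
        SomeObject SomeFunction()
        {
            ComplexType complexTypeInstance(1);

            SomeObject encodeHelper(true);                // copy = true
            encodeHelper.SetValue(complexTypeInstance);   // stored by value this time

            return encodeHelper;                          // no dangling pointer
        }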

    Read the article

  • Lookups targeting merged cells - only returning value for first row

    - by Ian
    I have a master worksheet which contains data that I wish to link to another 'summary' sheet using a lookup. However, some of the cells whose data I wish to include in the summary sheet are merged across two or more adjacent rows. To be clear, the 'primary' column A that I am using in my formula in order to identify the target row does not contain merged cells, but the column from which I wish to return a value does. I have tried VLOOKUP and INDEX+MATCH. The problem is that the data is only returned for the first row's key, and the others return zero (as though the cell in the target column were blank, where actually it is merged). I have tried inelegant ways around this, e.g. using IF statements to try to find the top row of the merged cell. However, these don't work well if the order of values in the summary sheet is different from that in the master sheet, as well as being messy. Can this be done?

    Read the article

  • model association or controller?

    - by andybritton
    I'm trying to create a Rails app that allows users to submit information about their pets. I've come to a point where my knowledge is limited and I don't know enough about what or how this could be done, so I'm hoping this will be relatively easy to answer. At the moment I have a model called Pet. This model currently stores basic information like name, picture, etc., but it also holds more specific data like type, breed, date of birth, etc. What I would like to be able to do is create a page that can match various records without them having to be manually categorized, if that makes sense, so a user's pet could be matched to other pets with the same breed, age, etc. I've read about nested models, and as I understand it this information could be submitted to two models in one form, but I am not sure whether this could be done directly in a separate controller which would only be visible to users with pets in these matched "groups", if that makes sense. So in essence, is it best practice to use one table to store all the information and just use a controller to match pets based on rows having the same values, or would it be far simpler to have a form with a nested model and link two tables together? The main feature needs to be matching without a user having to create a group or categorize pets, so the second model would need to add IDs to an array instead of just creating more and more rows.

    Read the article

  • Is hidden content (display: none;) -indexed- by search engines? [closed]

    - by user568458
    Possible Duplicate: How bad is it to use display: none in CSS?

    We've established on this site before (in this question) that, since there are so many legitimate uses for hiding content with display: none; when creating interactive features, sites aren't automatically penalised for content that is hidden this way (so long as it doesn't look algorithmically spammy). Google's Webmaster guidelines also make clear that a good practice when using content that is initially legitimately hidden for interactivity purposes is to also include the same content in a <noscript> tag, and Google recommend that if you design and code for users, including users with screen readers or javascript disabled, then 9 times out of 10 good relevant search rankings will follow (though their specific advice seems more written for cases where javascript writes new content to the page):

        JavaScript: Place the same content from the JavaScript in a <noscript> tag. If you use this method, ensure the contents are exactly the same as what's contained in the JavaScript, and that this content is shown to visitors who do not have JavaScript enabled in their browser.

    So, best practice seems pretty clear. What I can't find out, however, is the simple factual matter of whether hidden content is indexed by search engines (but with potential penalties if it looks 'spammy'), or whether it is ignored, or whether it is indexed but with a lower weighting (like <noscript> content is, apparently). (For bonus points it would be great to know if this varies or is consistent between display: none;, visibility: hidden;, etc., but that isn't crucial.)

    This is different to the other questions on display: none; and SEO: those are about good and bad practice, and the answers are discussions of good and bad practice. I'm interested simply in the factual yes-or-no question of whether search engines index, or ignore, content that is in display: none;, something those other questions' answers aren't totally clear on. One other question has an answer, "Yes", supported by a link to an article that doesn't really clear things up: it establishes that search engines can spot that text is hidden, it discusses (again) whether hidden text causes sites to be marked as spam, and it ultimately concludes that in mid-2011 Google's policy on hidden text was evolving and that they hadn't at that time started automatically penalising display: none; or marking it as spam. It's clear that display: none; isn't always spam and isn't always treated as spam (many Google sites use it...), but this doesn't clear up how, or if, it is indexed.

    What I will do is follow the guidelines and make sure that all the content that is initially hidden, which regular users can explore using javascript-driven interactivity, is also structured in a way that noscript/screenreader users can use. So I'm not interested in best practice, opinions, etc., because best practice seems really clear: accessibility best practice boosts SEO. But I'd like to know what exactly will happen: whether any display: none; content I have alongside <noscript> or otherwise accessibility-optimised content will be ignored, or indexed anyway, or picked up to compare against the <noscript> content but not indexed... etc.

    Read the article
