Search Results

Search found 21028 results on 842 pages for 'single player'.


  • How to reference the same CodeBehind class from multiple .aspx files (and be able to pre-compile the site)

    - by thelsdj
    I have a set of aspx pages that each live in their own directory and reference a single aspx.cs code behind file. This has worked fine previously because I never tried to pre-compile the site. IIS must have individually compiled each aspx, linking them to the contents of App_Code but never referencing more than one aspx at a time. Now that I'm trying to pre-compile the website using Web Deployment Projects I keep getting errors about the same class being found in multiple assemblies. I can't just drop the aspx.cs in App_Code and subclass it because it wouldn't be able to find the controls on the .aspx pages when compiling. Maybe I could explicitly define every control on the page in the .cs? But would that allow them to be wired up correctly? Any other ideas on how I can reference the same Page class from multiple .aspx pages and be able to pre-compile the entire website?


  • Rounded Corners Image Change on Hover

    - by Sarfraz
    Hello, I created a rounded box/button and sliced its first corner, the middle bar (which repeats horizontally to adjust the width of the button text/content) and the last corner, and used the following markup:

        <div id="left-corner"></div>
        <div id="middle-bar">About Us</div>
        <div id="right-corner"></div>

    These divs have corresponding images from CSS and are floated left. Those three divs create a single rounded button with the text About Us, which is fine. Problem: I have also created three similar slices of hover images, but I wonder how to apply hover to these buttons, because if I use :hover with the hovered slices, then even hovering over just the corner images creates the hover effect. One alternative is to use fixed-width buttons and slice the buttons completely, but I do not want to do that.


  • Level of detail algorithm not functioning correctly

    - by Darestium
    I have been working on this problem for months; I have been creating a Planet Generator of sorts, and after more than 6 months of work I am no closer to finishing it than I was 4 months ago. My problem: the terrain does not subdivide in the correct locations. It almost seems as if there is a ghost camera next to me, and the quads subdivide based on the position of this "ghost camera". Here is a video of the broken program: http://www.youtube.com/watch?v=NF_pHeMOju8 (the best example of the problem occurs around 0:36). For detail limiting, I am going for a chunked LOD approach, which subdivides the terrain based on how far you are away from it. I use a "depth table" to determine how many subdivisions should take place.

        void PQuad::construct_depth_table(float distance)
        {
            tree[0] = -1;
            for (int i = 1; i < MAX_DEPTH; i++)
            {
                tree[i] = distance;
                distance /= 2.0f;
            }
        }

    The chunked LOD relies on the child/parent structure of quads; the depth is determined by a constant, e.g. if the constant is 6, there are six levels of detail. The quads which should be drawn go through a distance test from the player to the centre of the quad.

        void PQuad::get_recursive(glm::vec3 player_pos, std::vector<PQuad*>& out_children)
        {
            for (size_t i = 0; i < children.size(); i++)
            {
                children[i].get_recursive(player_pos, out_children);
            }

            if (this->should_draw(player_pos) || this->depth == 0)
            {
                out_children.emplace_back(this);
            }
        }

        bool PQuad::should_draw(glm::vec3 player_position)
        {
            float distance = distance3(player_position, centre);

            if (distance < tree[depth])
            {
                return true;
            }

            return false;
        }

    The root quad has four children, which could be visualized like the following, where each [] is a child:

        [] []
        [] []

    Each child has the same number of children, up until the detail limit; the quads which are 6 iterations deep are leaf nodes, and these nodes have no children. Each node has a corresponding Mesh; each Mesh structure has 16x16 Quad-shapes, and each Mesh's Quad-shapes halve in size at each detail level deeper - creating more detail.

        void PQuad::construct_children()
        {
            // Calculate the position of the Quad based on the parent's location
            calculate_position();

            if (depth < (int)MAX_DEPTH)
            {
                children.reserve((int)NUM_OF_CHILDREN);
                for (int i = 0; i < (int)NUM_OF_CHILDREN; i++)
                {
                    children.emplace_back(PQuad(this->face_direction, this->radius));
                    PQuad *child = &children.back();
                    child->set_depth(depth + 1);
                    child->set_child_index(i);
                    child->set_parent(this);
                    child->construct_children();
                }
            }
            else
            {
                leaf = true;
            }
        }

    The following function creates the vertices for each quad; I feel that it may play a role in the problem - I just can't determine what is causing it.
        void PQuad::construct_vertices(std::vector<glm::vec3> *vertices, std::vector<Color3> *colors)
        {
            vertices->reserve(quad_width * quad_height);

            for (int y = 0; y < quad_height; y++)
            {
                for (int x = 0; x < quad_width; x++)
                {
                    switch (face_direction)
                    {
                    case YIncreasing:
                        vertices->emplace_back(glm::vec3(position.x + x * element_width, quad_height - 1.0f, -(position.y + y * element_width)));
                        break;
                    case YDecreasing:
                        vertices->emplace_back(glm::vec3(position.x + x * element_width, 0.0f, -(position.y + y * element_width)));
                        break;
                    case XIncreasing:
                        vertices->emplace_back(glm::vec3(quad_width - 1.0f, position.y + y * element_width, -(position.x + x * element_width)));
                        break;
                    case XDecreasing:
                        vertices->emplace_back(glm::vec3(0.0f, position.y + y * element_width, -(position.x + x * element_width)));
                        break;
                    case ZIncreasing:
                        vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, 0.0f));
                        break;
                    case ZDecreasing:
                        vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, -(quad_width - 1.0f)));
                        break;
                    }

                    // Position the bottom, right, front vertex of the cube from being (0,0,0) to (-16, -16, 16)
                    (*vertices)[vertices->size() - 1] -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));

                    colors->emplace_back(Color3(255.0f, 255.0f, 255.0f, false));
                }
            }

            switch (face_direction)
            {
            case YIncreasing:
                this->centre = glm::vec3(position.x + quad_width / 2.0f, quad_height - 1.0f, -(position.y + quad_height / 2.0f));
                break;
            case YDecreasing:
                this->centre = glm::vec3(position.x + quad_width / 2.0f, 0.0f, -(position.y + quad_height / 2.0f));
                break;
            case XIncreasing:
                this->centre = glm::vec3(quad_width - 1.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
                break;
            case XDecreasing:
                this->centre = glm::vec3(0.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
                break;
            case ZIncreasing:
                this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, 0.0f);
                break;
            case ZDecreasing:
                this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, -(quad_height - 1.0f));
                break;
            }

            this->centre -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));
        }

    Any help in discovering what is causing this "subdividing in the wrong place" would be greatly appreciated.


  • optimistic locking batch update

    - by Priit
    How do I use optimistic locking with batch updates? I am using SimpleJdbcTemplate, and for a single row I can build an update SQL statement that increments the version column value and includes the version in the WHERE clause. Unfortunately the result int[] updated = simpleJdbcTemplate.batchUpdate(...) does not contain rowcounts when using the Oracle driver - all elements are -2, indicating an unknown rowcount. Is there some other, more performant way of doing this than executing all updates individually? These batches contain an average of (only) 5 items, but may be up to 250.
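
    For what it's worth, here is the usual shape of a version-checked batch update with a driver-independent fallback, sketched in Python with sqlite3 standing in for the Oracle driver (the question is about Spring's SimpleJdbcTemplate; the item table and its write_token column are made up for the sketch). Tagging each write lets you find the stale rows afterwards even when the driver reports -2 for every batch rowcount:

        import sqlite3
        import uuid

        def batch_update_optimistic(conn, rows):
            """rows: list of (new_value, row_id, expected_version) tuples."""
            cur = conn.cursor()
            # Tag every row we touch with a fresh token, so our own successful
            # writes can be told apart from anyone else's afterwards.
            token = str(uuid.uuid4())
            cur.executemany(
                "UPDATE item SET value = ?, version = version + 1, write_token = ?"
                " WHERE id = ? AND version = ?",
                [(value, token, row_id, version) for (value, row_id, version) in rows],
            )
            ids = [row_id for (_, row_id, _) in rows]
            marks = ",".join("?" * len(ids))
            query = "SELECT id FROM item WHERE write_token = ? AND id IN (%s)" % marks
            updated = {row_id for (row_id,) in cur.execute(query, [token] + ids)}
            return [r for r in rows if r[1] not in updated]   # the stale rows

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, value TEXT,"
                     " version INTEGER, write_token TEXT)")
        conn.executemany("INSERT INTO item (id, value, version) VALUES (?, ?, ?)",
                         [(1, 'a', 3), (2, 'b', 7)])
        print(batch_update_optimistic(conn, [('a2', 1, 3), ('b2', 2, 6)]))
        # -> [('b2', 2, 6)]: row 2 was changed elsewhere, so its update lost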


  • LINQ method chaining and granular error handling

    - by Clafou
    I have a method which can be written pretty neatly through method chaining:

        return viewer.ServerReport.GetParameters()
            .Single(p => p.Name == Convention.Ssrs.RegionParamName)
            .ValidValues
            .Select(v => v.Value);

    However, I'd like to be able to do some checks at each point, as I wish to provide helpful diagnostics information if any of the chained methods returns unexpected results. To achieve this, I need to break up all my chaining and follow each call with an if block. It makes the code a lot less readable. Ideally I'd like to be able to weave in some chained method calls which would allow me to handle unexpected outcomes at each point (e.g. throw a meaningful exception such as new ConventionException("The report contains no parameter") if the first method returns an empty collection). Can anyone suggest a simple way to achieve such a thing?
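
    One pattern that keeps the chain readable is a pass-through guard that raises the diagnostic for you. In C# this would naturally be an extension method on IEnumerable<T>; below is a minimal sketch of the idea in Python, with all data invented for illustration:

        class ConventionException(Exception):
            """Meaningful diagnostic raised when a chained step misbehaves."""

        def expect(value, predicate, message):
            # Pass the value straight through, or fail loudly with context.
            if not predicate(value):
                raise ConventionException(message)
            return value

        # Stand-in for the report parameters in the question.
        params = [{"name": "Region", "valid_values": ["North", "South"]}]

        region = expect(
            [p for p in params if p["name"] == "Region"],
            lambda ps: len(ps) == 1,
            "The report contains no Region parameter",
        )[0]
        values = expect(region["valid_values"], bool,
                        "The Region parameter has no valid values")
        print(values)   # ['North', 'South']

    Each stage stays on its own line with its own one-line diagnostic, so the shape of the original chain survives.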


  • firefox displaying .swf with wrong dimensions

    - by John Leonard
    I can't figure this one out. I have a Flex app with fixed dimensions (950 x 700). I'm compiling the swf and using the default wrapper that Flex generates. In some instances of Firefox, the dimensions are wrong. On top of that, with different domains, I get the .swf rendered with different dimensions (both wrong). Firefox 3.6.3 on one computer works fine; on another, the dimensions are off. Swf example - and if you don't see a clipped swf, here are the screenshots: rendered incorrectly on one domain, and rendered on another domain with different bad dimensions. I'm at a total loss. When I embed the .swf with swfobject, I get the same issue. In the example you see, there's nothing in the app except for a single label centered on the stage.


  • Jqm is not a function

    - by kris
    Hi all, I'm having some trouble with jQuery and JqModal, and I hope you are able to help, since I've been struggling for hours. I have a single button element with an onclick action running my method "test" (shown below):

        $('#picture_form').jqm({ajax: '/test.php'});
        $('#picture_form').jqmShow();

    This will load the ajax content of test.php into my div element picture_form, shown using JqModal as it's supposed to! Though when I close this window and re-click the button, I'm getting the error: $("#picture_form").jqm is not a function. As a solution I've tried to use the JqModal trigger function, and this leaves me able to open and close the JqModal windows as many times as I want to. Sadly I can only call the 'trigger' in my test environment; in my production code I have to open the JqModal window using a function. Does anyone have a clue why this 'bug' appears when calling the opening from a function? Thanks in advance


  • ActiveRecord fundamentally incompatible with composite keys?

    - by stimms
    I have been attempting to use SubSonic for a project on which I'm working. All was going quite well until I encountered a link table with a composite primary key - that is, a key made up of the primary keys of the two tables it joins. SubSonic failed to recognize both keys, which was problematic. I was going to adjust SubSonic to support compound keys, but I stopped and thought "Maybe there is a reason for this". Normally ActiveRecord relies on a single primary key field for every record, even in link tables. But is this necessary? Should I just give up on ActiveRecord for this project, or continue with my modifications?


  • Is it really wrong to version documents using CouchDB's default behaviour?

    - by Tomas Sedovic
    This is one of those "I know I shouldn't do this, but it's oh so convenient" questions. Sorry about that. I plan to use CouchDB for storing a bunch of documents and keeping their entire revision history. CouchDB does the versioning automatically, but it is strongly discouraged for programmer's use: "You cannot rely on document revisions for any other purpose than concurrency control." From what I've found on the CouchDB wiki, the versions can get deleted either during compaction or during replication. As far as I can tell, compaction must always be triggered manually, and replication occurs only when there's more than one database server. The question is: if I won't run compaction and will use only a single database instance for my documents, can I just use CouchDB's document versioning and expect it to work? What other problems might I run into? E.g. does not running compaction hurt performance or consume significantly more disk space (compared to handling the versioning manually)?
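
    If you do go down this road, old revisions can be read back with plain GETs for as long as they survive - a quick sketch using Python's requests library against a hypothetical database URL (the rev and revs_info query parameters are standard CouchDB API):

        import requests

        DB = "http://localhost:5984/mydb"   # hypothetical database

        def available_revisions(doc_id):
            """Revision ids CouchDB still holds for this document."""
            doc = requests.get(f"{DB}/{doc_id}", params={"revs_info": "true"}).json()
            return [r["rev"] for r in doc["_revs_info"] if r["status"] == "available"]

        def get_revision(doc_id, rev):
            """Fetch one specific revision; this 404s once compaction removes it."""
            resp = requests.get(f"{DB}/{doc_id}", params={"rev": rev})
            resp.raise_for_status()
            return resp.json()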


  • Print out PDF with javascript [closed]

    - by Daniel Abrahamsson
    I have a need to print out multiple PDFs with the help of javascript. Is this even possible without rendering each PDF in a separate window and calling window.print()? Basically, I would like to be able to do something like print('my_pdf_url'). Edit: After some searching, I have come to the conclusion that there are no other methods than the one I've described above. It is a far from perfect solution, but it works in simple cases. Edit: I ended up merging the PDFs into one monster PDF on the server side and then sending this single PDF to the user, who can then choose to print it out.
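
    The server-side merge the author settled on is a few lines in most stacks; here is a sketch in Python using the pypdf library (the question doesn't say which server language was actually used, and the file names are made up):

        from pypdf import PdfWriter

        def merge_pdfs(paths, out_path="merged.pdf"):
            writer = PdfWriter()
            for path in paths:
                writer.append(path)   # appends every page of this source PDF
            with open(out_path, "wb") as f:
                writer.write(f)

        merge_pdfs(["invoice1.pdf", "invoice2.pdf", "invoice3.pdf"])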


  • Purpose of Trigraph sequences in C++?

    - by Kirill V. Lyadvinsky
    According to the C++'03 Standard 2.3/1:

        Before any other processing takes place, each occurrence of one of the
        following sequences of three characters ("trigraph sequences") is
        replaced by the single character indicated in Table 1.

        ----------------------------------------------------------------------------
        | trigraph | replacement | trigraph | replacement | trigraph | replacement |
        ----------------------------------------------------------------------------
        |   ??=    |      #      |   ??(    |      [      |   ??<    |      {      |
        |   ??/    |      \      |   ??)    |      ]      |   ??>    |      }      |
        |   ??'    |      ^      |   ??!    |      |      |   ??-    |      ~      |
        ----------------------------------------------------------------------------

    In real life that means that the code printf( "What??!\n" ); will result in printing What| because ??! is a trigraph sequence that is replaced with the | character. My question is: what is the purpose of using trigraphs? Is there any practical advantage to using trigraphs? UPD: It was mentioned in the answers that some European keyboards don't have all the punctuation characters, so do non-US programmers have to use trigraphs in everyday life? UPD2: Visual Studio 2010 has trigraph support turned off by default.


  • Migrating from SQL Server to firebird: pro and cons

    - by user193655
    I am considering the migration for 4 reasons:

    1) SQL Server installation is a nightmare, especially for 1-user software (even if typically I have 3-20 users, sometimes I sell my software to single users: it is incredible to have troubles installing the DB, while installing the application means copying an exe...). (Note my max installation is 100 users, but there is no upper limit.) The software installs in 10 seconds, SQL Server in 1 hour. Firebird installation is much easier.

    2) SQL Server runs on Windows Server only.

    3) My customers all have the Express edition.

    4) I am not using any advanced features. I am now starting to use filestream, but the main reason for this is that the Express edition has a 4/10 GB db size limit.

    So these are all pros of moving to Firebird. Which are the cons? I can also plan to support both platforms, but I fear this will backfire.


  • How can I prevent MEF from throwing an exception when an Export is not found?

    - by Kilhoffer
    I have a class with an IList<T> property decorated with the [ImportMany(AllowRecomposition = true)] attribute. There are some conditions in which the application might not find any available Exports of the requested type. Right now, it's throwing a CompositionException if no Exports of the requested type are found. I dislike application flow being determined by thrown exceptions, so I would rather not catch and react in this case; rather, I just want program execution to continue. Is there a flag or something I can set to make this Import optional? I know that for single import properties you can do this: [Import(AllowDefault = true)] - but 'AllowDefault' is not an option on the ImportMany attribute.


  • Does XercesC contain an extensive logic of XMLSchema validation?

    - by seas
    I tried to implement a small XML validation tool with XercesC. For some reason I cannot use the existing validators right out of the box - I need some preprocessing, and would like to combine it with validation in a single tool. I used the DOM parser and specified a DOMErrorHandler. But instead of a set of errors with detailed messages, like I saw from xmllint for the same xml and xmlschema files, only one message appeared, saying the document has a wrong structure, with no details. Probably I did something wrong; but I also assume XercesC doesn't contain xmllint's functionality right out of the box. Can anybody give me a hint before I spend too much time? Thanks.


  • mySQL one-to-many query

    - by Stomped
    I've got 3 tables that are something like this (simplified here ofc):

        users
            user_id
            user_name

        info
            info_id
            user_id
            rate

        contacts
            contact_id
            user_id
            contact_data

    users has a one-to-one relationship with info, although info doesn't always have a related entry. users has a one-to-many relationship with contacts, although contacts doesn't always have related entries. I know I can grab the proper 'users' + 'info' with a left join; is there a way to get all the data I want at once? For example, one returned record might be:

        user_id: 5
        user_name: tom
        info_id: 1
        rate: 25.00
        contact_id: 7
        contact_data: 555-1212
        contact_id: 8
        contact_data: 555-1315
        contact_id: 9
        contact_data: 555-5511

    Is this possible with a single query? Or must I use multiple?
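
    Both of the usual options are sketched below, runnable against SQLite (whose GROUP_CONCAT mirrors MySQL's); table and column names follow the question. The first query returns one row per contact with the user/info columns repeated; the second collapses the contacts into a single column so each user comes back as one row:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users    (user_id INTEGER, user_name TEXT);
            CREATE TABLE info     (info_id INTEGER, user_id INTEGER, rate REAL);
            CREATE TABLE contacts (contact_id INTEGER, user_id INTEGER, contact_data TEXT);
            INSERT INTO users    VALUES (5, 'tom');
            INSERT INTO info     VALUES (1, 5, 25.0);
            INSERT INTO contacts VALUES (7, 5, '555-1212'), (8, 5, '555-1315'), (9, 5, '555-5511');
        """)

        # One row per contact; user and info columns repeat on every row.
        one_to_many = conn.execute("""
            SELECT u.user_id, u.user_name, i.info_id, i.rate,
                   c.contact_id, c.contact_data
            FROM users u
            LEFT JOIN info i     ON i.user_id = u.user_id
            LEFT JOIN contacts c ON c.user_id = u.user_id
        """).fetchall()

        # One row per user; the contacts are concatenated into a single column.
        grouped = conn.execute("""
            SELECT u.user_id, u.user_name, i.rate,
                   GROUP_CONCAT(c.contact_data) AS contact_list
            FROM users u
            LEFT JOIN info i     ON i.user_id = u.user_id
            LEFT JOIN contacts c ON c.user_id = u.user_id
            GROUP BY u.user_id
        """).fetchall()

        print(one_to_many)   # 3 rows, one per contact
        print(grouped)       # [(5, 'tom', 25.0, '555-1212,555-1315,555-5511')]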


  • MX Records - go to two servers?

    - by Jim Beam
    Right now I have a single mail server for IMAP. Let's say I want to introduce Exchange but not all users will be on it. Some users will be on my "legacy" IMAP, others on the "new" Exchange. Is it possible to "split up" your users (from the same e-mail domain) on two services like this? What would the MX records look like? My guess is that this isn't possible, but thought I'd ask. By the way, I realize that Exchange can offer IMAP and all that, but my question is more about splitting users across services and the MX records. The actual protocols above are only examples.


  • Python - Converting CSV to Objects - Code Design

    - by victorhooi
    Hi, I have a small script we're using to read in a CSV file containing employees, and perform some basic manipulations on that data. We read in the data (import_gd_dump), and create an Employees object containing a list of Employee objects (maybe I should think of a better naming convention...lol). We then call clean_all_phone_numbers() on Employees, which calls clean_phone_number() on each Employee, as well as lookup_all_supervisors() on Employees.

        import csv
        import re
        import sys

        #class CSVLoader:
        #    """Virtual class to assist with loading in CSV files."""
        #    def import_gd_dump(self, input_file='Gp Directory 20100331 original.csv'):
        #        gd_extract = csv.DictReader(open(input_file), dialect='excel')
        #        employees = []
        #        for row in gd_extract:
        #            curr_employee = Employee(row)
        #            employees.append(curr_employee)
        #        return employees
        #        #self.employees = {row['dbdirid']:row for row in gd_extract}

        # Previously, this was inside a (virtual) class called "CSVLoader". However,
        # according to here (http://tomayko.com/writings/the-static-method-thing) - the
        # idiomatic way of doing this in Python is not with a class-function but with a
        # module-level function
        def import_gd_dump(input_file='Gp Directory 20100331 original.csv'):
            """Return a list ('employee') of dict objects, taken from a Group Directory CSV file."""
            gd_extract = csv.DictReader(open(input_file), dialect='excel')
            employees = []
            for row in gd_extract:
                employees.append(row)
            return employees

        def write_gd_formatted(employees_dict, output_file="gd_formatted.csv"):
            """Read in an Employees() object, and write out each Employee() inside this to a CSV file"""
            gd_output_fieldnames = ('hrid', 'mail', 'givenName', 'sn', 'dbcostcenter',
                                    'dbdirid', 'hrreportsto', 'PHFull', 'PHFull_message',
                                    'SupervisorEmail', 'SupervisorFirstName',
                                    'SupervisorSurname')
            try:
                gd_formatted = csv.DictWriter(open(output_file, 'w', newline=''),
                                              fieldnames=gd_output_fieldnames,
                                              extrasaction='ignore', dialect='excel')
            except IOError:
                print('Unable to open file, IO error (Is it locked?)')
                sys.exit(1)

            headers = {n: n for n in gd_output_fieldnames}
            gd_formatted.writerow(headers)
            for employee in employees_dict.employee_list:
                # We're using the employee object's inbuilt __dict__ attribute -
                # hmm, is this good practice?
                gd_formatted.writerow(employee.__dict__)

        class Employee:
            """An Employee in the system, with employee attributes (name, email, cost-centre etc.)"""
            def __init__(self, employee_attributes):
                """We use the Employee constructor to convert a dictionary into instance attributes."""
                for k, v in employee_attributes.items():
                    setattr(self, k, v)

            def clean_phone_number(self):
                """Perform some rudimentary checks and corrections, to make sure numbers
                are in the right format. Numbers should be in the form 0XYYYYYYYY, where
                X is the area code, and Y is the local number."""
                if self.telephoneNumber is None or self.telephoneNumber == '':
                    return '', 'Missing phone number.'
                else:
                    standard_format = re.compile(r'^\+(?P<intl_prefix>\d{2})\((?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
                    extra_zero = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
                    missing_hyphen = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})(?P<local_second_half>\d{4})')
                    if standard_format.search(self.telephoneNumber):
                        result = standard_format.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), ''
                    elif extra_zero.search(self.telephoneNumber):
                        result = extra_zero.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Extra zero in area code - ask user to remediate. '
                    elif missing_hyphen.search(self.telephoneNumber):
                        result = missing_hyphen.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Missing hyphen in local component - ask user to remediate. '
                    else:
                        return '', "Number didn't match recognised format. Original text is: " + self.telephoneNumber

        class Employees:
            def __init__(self, import_list):
                self.employee_list = []
                for employee in import_list:
                    self.employee_list.append(Employee(employee))

            def clean_all_phone_numbers(self):
                for employee in self.employee_list:
                    # Should we just set this directly in Employee.clean_phone_number() instead?
                    employee.PHFull, employee.PHFull_message = employee.clean_phone_number()

            # Hmm, the search is O(n^2) - there's probably a better way of doing this search?
            def lookup_all_supervisors(self):
                for employee in self.employee_list:
                    if employee.hrreportsto is not None and employee.hrreportsto != '':
                        for supervisor in self.employee_list:
                            if supervisor.hrid == employee.hrreportsto:
                                (employee.SupervisorEmail, employee.SupervisorFirstName,
                                 employee.SupervisorSurname) = supervisor.mail, supervisor.givenName, supervisor.sn
                                break
                        else:
                            (employee.SupervisorEmail, employee.SupervisorFirstName,
                             employee.SupervisorSurname) = ('Supervisor not found.',
                                                            'Supervisor not found.',
                                                            'Supervisor not found.')
                    else:
                        (employee.SupervisorEmail, employee.SupervisorFirstName,
                         employee.SupervisorSurname) = ('Supervisor not set.',
                                                        'Supervisor not set.',
                                                        'Supervisor not set.')

            # Is there a more pythonic way of doing this?
            def print_employees(self):
                for employee in self.employee_list:
                    print(employee.__dict__)

        if __name__ == '__main__':
            db_employees = Employees(import_gd_dump())
            db_employees.clean_all_phone_numbers()
            db_employees.lookup_all_supervisors()
            #db_employees.print_employees()
            write_gd_formatted(db_employees)

    Firstly, my preamble question is: can you see anything inherently wrong with the above, from either a class-design or Python point of view? Is the logic/design sound? Anyhow, to the specifics:

    - The Employees object has a method, clean_all_phone_numbers(), which calls clean_phone_number() on each Employee object inside it. Is this bad design? If so, why?
    - Also, is the way I'm calling lookup_all_supervisors() bad? Originally, I wrapped the clean_phone_number() and lookup_supervisor() methods in a single function, with a single for-loop inside it. clean_phone_number() is O(n), I believe, and lookup_supervisor() is O(n^2) - is it OK splitting it into two loops like this?
    - In clean_all_phone_numbers(), I'm looping over the Employee objects and setting their values using return/assignment - should I be setting this inside clean_phone_number() itself?
    - There are also a few things that I sort of hacked out, not sure if they're bad practice - e.g. print_employees() and write_gd_formatted() both use __dict__, and the constructor for Employee uses setattr() to convert a dictionary into instance attributes.

    I'd value any thoughts at all. If you think the questions are too broad, let me know and I can repost them split up (I just didn't want to pollute the boards with multiple similar questions, and the three questions are more or less fairly tightly related). Cheers, Victor


  • sql server 2005 reporting services-- how to use multiple datasets in report

    - by larryq
    Hi everyone, I'm new to SQL Server reporting services, and am trying to decipher an existing report. It's nothing too bad, but I notice it does have two report datasets defined. (They are generated via separate stored procedures) I'm trying to figure out where and how the report datasets are linked together so the Fields collection has both sets of columns available and the report has a single rowset to traverse. Is there a section in the report layout where a joining of datasets is defined? I'm using Visual Studio 2005 to design and preview the report fwiw. Thanks for your help!


  • How to continuously fade a BitmapData to a certain color?

    - by Cay
    I'm drawing stuff on a BitmapData and I need to continuously fade every pixel to 0x808080 while still drawing (it's for a DisplacementMapFilter)... I think this illustrates my problem: http://www.ventdaval.com/lab/grey.swf The simplest approach I tried was drawing a semi-transparent grey box on it, but it never reaches a single color (i.e. 0x808081 will never be turned into 0x808080)... The same kind of thing happens with a ColorMatrixFilter trying to progressively reduce the saturation and/or contrast (the example above is applying a filter with -10 contrast every frame). I'm trying paletteMap now, and I'm sensing this could be the way to go, but I haven't been able to get it... any ideas?
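
    For what it's worth, the "never quite reaches the target" behaviour falls out of 8-bit channel rounding rather than anything specific to BitmapData. A plain-Python sketch of the arithmetic: a proportional blend (like drawing a low-alpha grey box every frame) stalls once the per-frame step rounds to zero, while a clamped fixed step always arrives:

        def blend(value, target, alpha=0.1):
            # Proportional fade, like compositing a 10%-alpha grey box each frame.
            return round(value + (target - value) * alpha)

        def step(value, target, amount=1):
            # Clamped fixed step toward the target; always converges.
            return value + max(-amount, min(amount, target - value))

        v = 0x90
        for _ in range(100):
            v = blend(v, 0x80)
        print(hex(v))   # 0x84: the step rounds away once it drops below half a level

        v = 0x90
        for _ in range(100):
            v = step(v, 0x80)
        print(hex(v))   # 0x80: reaches the target exactly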


  • Office 2010: It's not just DOC(X) and XLS(X)

    - by andrewbrust
    Office 2010 has released to manufacturing. The bits have left the (product team’s) building. Will you upgrade?

    This version of Office is officially numbered 14, a designation that correlates with the various releases, through the years, of Microsoft Word. There were six major versions of Word for DOS, during whose release cycles came three 16-bit Windows versions. Then, starting with Word 95 and counting through Word 2007, there have been six more versions – all for the 32-bit Windows platform. Skip version 13 to ward off folksy bad luck (and, perhaps, the bugs that could come with it) and that brings us to version 14, which includes implementations for both 32- and 64-bit Windows platforms. We’ve come a long way baby. Or have we?

    As it does every three years or so, debate will now start to rage on over whether we need a “14th” version of the PC platform’s standard word processor, or a “13th” version of the spreadsheet. If you accept the premise of that question, then you may be on a slippery slope toward answering it in the negative. Thing is, that premise is valid for certain customers and not others.

    The Microsoft Office product has morphed from one that offered core word processing, spreadsheet, presentation and email functionality to a suite of applications that provides unique, new value-added features, and even whole applications, in the context of those core services. The core apps thus grow in mission: Excel is a BI tool. Word is a collaborative editorial system for the production of publications. PowerPoint is a media production platform for live presentations and, increasingly, for delivering more effective presentations online. Outlook is a time and task management system. Access is a rich client front-end for data-driven self-service SharePoint applications. OneNote helps you capture ideas, corral random thoughts in a semi-structured way, and then tie them back to other, more rigidly structured, Office documents.

    Google Docs and other cloud productivity platforms like Zoho don’t really do these things. And there is a growing chorus of voices who say that they shouldn’t, because those ancillary capabilities are over-engineered, over-produced and “under-necessary.” They might say Microsoft is layering on superfluous capabilities to avoid admitting that Office’s core capabilities, the ones people really need, have become commoditized.

    It’s hard to take sides in that argument, because different people, and the different companies that employ them, have different needs. For my own needs, it all comes down to three basic questions: will the new version of Office save me time, will it make the mundane parts of my job easier, and will it augment my services to customers?

    I need my time back. I need to spend more of it with my family, and more of it focusing on my own core capabilities rather than the administrative tasks around them. And I also need my customers to be able to get more value out of the services I provide.

    Help me triage my inbox, help me get proposals done more quickly and make them easier to read. Let me get my presentations done faster, make them more effective and make it easier for me to reuse materials from other presentations. And, since I’m in the BI and data business, help me and my customers manage data and analytics more easily, both on the desktop and online.

    Those are my criteria. And, with those in mind, Office 2010 is looking like a worthwhile upgrade.
    Perhaps it’s not earth-shattering, but it offers a combination of incremental improvements and a few new major capabilities that I think are quite compelling. I provide a brief roundup of them here. It’s admittedly arbitrary and not comprehensive, but I think it tells the Office 2010 story effectively.

    Across the Suite

    More than any other, this release of Office aims to give collaboration a real workout. In certain apps, for the first time, documents can be opened simultaneously by multiple users, with colleagues’ changes appearing in near real-time. Web-browser-based versions of Word, Excel, PowerPoint and OneNote will be available to extend collaboration to contributors who are off the corporate network.

    The ribbon user interface is now more pervasive (for example, it appears in OneNote and in Outlook’s main window). It’s also customizable, allowing users to add, easily, buttons and options of their choosing, into new tabs, or into new groups within existing tabs.

    Microsoft has also taken the File menu (which was the “Office Button” menu in the 2007 release) and made it into a full-screen “Backstage” view where document-wide operations, like saving, printing and online publishing are performed.

    And because, more and more, heavily formatted content is cut and pasted between documents and applications, Office 2010 makes it easier to manage the retention or jettisoning of that formatting right as the paste operation is performed. That’s much nicer than stripping it off, or adding it back, afterwards.

    And, speaking of pasting, a number of Office apps now make it especially easy to insert screenshots within their documents. I know that’s useful to me, because I often document or critique applications and need to show them in action. For the vast majority of users, I expect that this feature will be more useful for capturing snapshots of Web pages, but we’ll have to see whether this feature becomes popular.

    Excel

    At first glance, Excel 2010 looks and acts nearly identically to the 2007 version. But additional glances are necessary. It’s important to understand that lots of people in the working world use Excel as more of a database, analytics and mathematical modeling tool than merely as a spreadsheet. And it’s also important to understand that Excel wasn’t designed to handle such workloads past a certain scale. That all changes with this release.

    The first reason things change is that Excel has been tuned for performance. It’s been optimized for multi-threaded operation; previously lengthy processes have been shortened, especially for large data sets; more rows and columns are allowed and, for the first time, Excel (and the rest of Office) is available in a 64-bit version. For Excel, this means users can take advantage of more than the 2GB of memory that the 32-bit version is limited to.

    On the analysis side, Excel 2010 adds Sparklines (tiny charts that fit into a single cell and can therefore be presented down an entire column or across a row) and Slicers (a more user-friendly filter mechanism for PivotTables and charts, which visually indicates what the filtered state of a given data member is). But most important, Excel 2010 supports the new PowerPivot add-in which brings true self-service BI to Office. PowerPivot allows users to import data from almost anywhere, model it, and then analyze it.
    Rather than forcing users to build “spreadmarts” or use corporate-built data warehouses, PowerPivot models function as true columnar, in-memory OLAP cubes that can accommodate millions of rows of data and deliver fast drill-down performance.

    And speaking of OLAP, Excel 2010 now supports an important Analysis Services OLAP feature called write-back. Write-back is especially useful in financial forecasting scenarios for which Excel is the natural home. Support for write-back is long overdue, but I’m still glad it’s there, because I had almost given up on it.

    PowerPoint

    This version of PowerPoint marks its progression from a presentation tool to a video and photo editing and production tool. Whether or not it’s successful in this pursuit, and if offering this is even a sensible goal, is another question.

    Regardless, the new capabilities are kind of interesting. A greatly enhanced set of slide transitions with 3D effects; in-product photo and video editing; accommodation of embedded videos from services such as YouTube; and the ability to save a presentation as a video each lay testimony to PowerPoint’s transformation into a media tool and away from a pure presentation tool. These capabilities also recognize the importance of the Web as both a source for materials and a channel for disseminating PowerPoint output.

    Congruent with that is PowerPoint’s new ability to broadcast a slide presentation, using a quickly-generated public URL, without involving the hassle or expense of a Web meeting service like GoToMeeting or Microsoft’s own LiveMeeting. Slides presented through this broadcast feature retain full color fidelity, and transitions and animations are preserved as well.

    Outlook

    Microsoft’s ubiquitous email/calendar/contact/task management tool gains long overdue speed improvements, especially against POP3 email accounts. Outlook 2010 also supports multiple Exchange accounts, rather than just one; tighter integration with OneNote; and a new Social Connector providing integration with, and presence information from, online social network services like LinkedIn and Facebook (not to mention Windows Live). A revamped conversation view now includes messages that are part of a given thread regardless of which folder they may be stored in.

    I don’t know yet how well the Social Connector will work or whether it will keep Outlook relevant to those who live on Facebook and LinkedIn. But among the other features, there’s very little not to like.

    OneNote

    To me, OneNote is the part of Office that just keeps getting better. There is one major caveat to this, which I’ll cover in a moment, but let’s first catalog what new stuff OneNote 2010 brings. The best part of OneNote is the way each of its versions has managed hierarchy: Notebooks have sections, sections have pages, pages have sub pages, multiple notes can be contained in either, and each note supports infinite levels of indentation. None of that is new to 2010, but the new version does make creation of pages and subpages easier and also makes simple work out of promoting and demoting pages from sub page to full page status. And relationships between pages are quite easy to create now: much like a Wiki, simply typing a page’s name in double-square-brackets (“[[…]]”) creates a link to it.

    OneNote is also great at integrating content outside of its notebooks.
    With a new Dock to Desktop feature, OneNote becomes aware of what window is displayed in the rest of the screen and, if it’s an Office document or a Web page, links the notes you’re typing, at the time, to it. A single click from your notes later on will bring that same document or Web page back on-screen. Embedding content from Web pages and elsewhere is also easier. Using OneNote’s Windows Key+S combination to grab part of the screen now allows you to specify the destination of that bitmap instead of automatically creating a new note in the Unfiled Notes area. Using the Send to OneNote buttons in Internet Explorer and Outlook results in the same choice.

    Collaboration gets better too. Real-time multi-author editing is better accommodated, and determining the author lineage of particular changes is easily carried out.

    My one pet peeve with OneNote is the difficulty of using it when I’m not on a Windows PC. OneNote’s main competitor, Evernote, while I believe inferior in terms of features, has client versions for PC, Mac, Windows Mobile, Android, iPhone, iPad and Web browsers. Since I have an Android phone and an iPad, I am practically forced to use it. However, the OneNote Web app should help here, as should a forthcoming version of OneNote for Windows Phone 7. In the meantime, it turns out that using OneNote’s Email Page ribbon button lets you move a OneNote page easily into Evernote (since every Evernote account gets a unique email address for adding notes) and that Evernote’s Email function combined with Outlook’s Send to OneNote button (in the Move group of the ribbon’s Home tab) can achieve the reverse.

    Access

    To me, the big change in Access 2007 was its tight integration with SharePoint lists. Access 2010 and SharePoint 2010 continue this integration with the introduction of SharePoint’s Access Services. Much as Excel Services provides a SharePoint-hosted experience for viewing (and now editing) Excel spreadsheet, PivotTable and chart content, Access Services allows for SharePoint browser-hosted editing of Access data within the forms that are built in the Access client itself.

    To me this makes all kinds of sense, although it does beg the question of where to draw the line between Access, InfoPath, SharePoint list maintenance and SharePoint 2010’s new Business Connectivity Services. Each of these tools provides overlapping data entry and data maintenance functionality.

    But if you do prefer Access, then you’ll like things like templates and application parts that make it easier to get off the blank page. These features help you quickly get tables, forms and reports built out. To make things look nice, Access even gets its own version of Excel’s Conditional Formatting feature, letting you add data bars and data-driven text formatting.

    Word

    As I said at the beginning of this post, upgrades to Office are about much more than enhancing the suite’s flagship word processing application. So are there any enhancements in Word worth mentioning? I think so. The most important one has to be the collaboration features. Essentially, when a user opens a Word document that is in a SharePoint document library (or Windows Live SkyDrive folder), rather than the whole document being locked, Word has the ability to observe more granular locks on the individual paragraphs being edited. Word also shows you who’s editing what, and its Save function morphs into a sync feature that both saves your changes and loads those made by anyone editing the document concurrently.
    There’s also a new navigation pane that lets you manage sections in your document in much the same way as you manage slides in a PowerPoint deck. Using the navigation pane, you can reorder sections, insert new ones, or promote and demote sections in the outline hierarchy. Not earth-shattering, but nice.

    Other Apps and Summarized Findings

    What about InfoPath, Publisher, Visio and Project? I haven’t looked at them yet. And for this post, I think that’s fine. While those apps (and, arguably, Access) cater to specific tasks, I think the apps we’ve looked at in this post service the general-purpose needs of most users.

    And the theme in those 2010 apps is clear: collaboration is key, the Web and productivity are indivisible, and making data and analytics into a self-service amenity is the way to go. But perhaps most of all, features are still important, as long as they get you through your day faster, rather than adding complexity for its own sake. I would argue that this is true for just about every product Microsoft makes: users want utility, not complexity.


  • Adding value to an Input field on click

    - by Wazdesign
    I have this structure on a form:

        <input type="text" value="" id="username" />
        <span class="input-value">John Smith</span>
        <a href="#" class="fill-input">Fill Input</a>

    When the user clicks on Fill Input, the data from the span which has the class input-value will be added to the input's value; after clicking the a tag, the code should look like this:

        <input type="text" value="John Smith" id="username" />
        <span class="input-value">John Smith</span>
        <a href="#" class="fill-input">Fill Input</a>

    There are many form elements/inputs like this on the single page.


  • .NET DataGridView rows' default alignment all set to left

    - by Xster
    DataGridView gets bound to a DataTable. I then try to set some columns' DefaultCellStyle.Alignment to right. It doesn't work. After a bunch of tinkering, it turns out that every single row's DefaultCellStyle.Alignment is set to left and overrides it. Setting DataGridView.DefaultCellStyle or RowsDefaultCellStyle doesn't do anything. That makes sense, since a particular row's DefaultCellStyle is more specific - but what gives? Why are they all set to something and not NotSet? How do I get rid of it instead of iterating through all rows?


  • Interpreting Search Results

    - by Simon
    Hi all, I am tasked with writing a program that, given a search term and the HTML source of a page representing search results of some unknown search engine (it can really be anything: a blog, a shop, Google, eBay, ...), needs to build a data structure of the results containing "what's in the results": a title for each result, the "details" link, the position within the results, etc. It is not known whether the results page contains any of the data at all, or whether there are any search results. The goal is to feed the data structure into another program that extracts meaning. What I am looking for is not BeautifulSoup or a RegExp, but rather some clever ideas or algorithms on how to interpret the HTML source. What do I do to find out what part of the page constitutes a single result item? How do I filter the markup noise to extract the important bits? What would you do? Pointers to fields of research covering what I'm trying to do are also greatly appreciated. Thanks, Simon
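
    One starting heuristic, offered as a sketch rather than an answer to the research question: result items tend to be many sibling elements sharing an identical tag path, so counting repeated paths with the standard library's html.parser gives a first guess at which element constitutes "a single result item":

        from collections import Counter
        from html.parser import HTMLParser

        class PathCounter(HTMLParser):
            """Counts how often each tag path (html/body/div/...) occurs."""
            def __init__(self):
                super().__init__()
                self.stack, self.paths = [], Counter()

            def handle_starttag(self, tag, attrs):
                self.stack.append(tag)
                self.paths["/".join(self.stack)] += 1

            def handle_endtag(self, tag):
                if tag in self.stack:
                    # Pop back to the most recent matching open tag.
                    idx = len(self.stack) - 1 - self.stack[::-1].index(tag)
                    del self.stack[idx:]

        def likely_result_path(html):
            p = PathCounter()
            p.feed(html)
            # Most repeated path wins; prefer the shallowest on a tie, since
            # that is more likely the item container than something inside it.
            return max(p.paths.items(),
                       key=lambda kv: (kv[1], -kv[0].count("/")))[0]

        html = ('<html><body><div><ul>'
                '<li><a href="/1">One</a></li>'
                '<li><a href="/2">Two</a></li>'
                '<li><a href="/3">Three</a></li>'
                '</ul></div></body></html>')
        print(likely_result_path(html))   # html/body/div/ul/li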


  • Excel 2007 Visual Basic Editor: eats spaces, throws cursor around

    - by Vincent
    I can't resolve this issue. I found a similar question here, but:

    - setting the workbook to Manual calculation (alt-m-x-m or alt-t-o, Formulas) didn't work
    - setting editor options to disable Auto syntax check & Background compile didn't work

    Does anybody have any idea how to fix this very annoying behaviour? I'm used to quickly popping up the VBA editor (alt-f11), hitting f7 to get into the code and writing some quick procedures there... and it's hard to get out of that habit. I don't want to write an office extension just to add a single quote to every cell in the range:

        For Each rg In Selection
            rg = Chr(39) & rg.Value
        Next

    F5, done...


  • Sending POST variables through a PHP proxy for AJAX?

    - by b. e. hollenbeck
    I've decided that using a PHP proxy for AJAX calls is the best way to go for a project - but I have a question regarding passing POST data through the proxy. As in: how to do it. Do I need to create a single variable in the javascript using alternate characters, then have the PHP proxy parse and modify the variable to reassemble a valid HTTP request? Or is there some means of passing along the $_POST array in a new request to the external server by pulling the data out of the headers and re-sending it?
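
    The second approach (replaying the raw body) is the simpler one. The question is about PHP, but the mechanics are language-independent; here is a hypothetical sketch of such a proxy in Python, using http.server plus the requests library, with a made-up target URL:

        from http.server import BaseHTTPRequestHandler, HTTPServer
        import requests

        REMOTE = "https://api.example.com/endpoint"   # hypothetical target

        class Proxy(BaseHTTPRequestHandler):
            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                body = self.rfile.read(length)
                # Forward the raw body and content type untouched - no need to
                # pack the variables into a single string on the client side.
                upstream = requests.post(
                    REMOTE, data=body,
                    headers={"Content-Type": self.headers.get("Content-Type", "")},
                )
                self.send_response(upstream.status_code)
                self.end_headers()
                self.wfile.write(upstream.content)

        # HTTPServer(("localhost", 8080), Proxy).serve_forever()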

